Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR (net-6.19-rc6).

No conflicts or adjacent changes.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+2570 -1994
+5
.mailmap
··· 473 473 Linus Lüssing <linus.luessing@c0d3.blue> <linus.luessing@ascom.ch> 474 474 Linus Lüssing <linus.luessing@c0d3.blue> <linus.luessing@web.de> 475 475 Linus Lüssing <linus.luessing@c0d3.blue> <ll@simonwunderlich.de> 476 + Linus Walleij <linusw@kernel.org> <linus.walleij@ericsson.com> 477 + Linus Walleij <linusw@kernel.org> <linus.walleij@stericsson.com> 478 + Linus Walleij <linusw@kernel.org> <linus.walleij@linaro.org> 479 + Linus Walleij <linusw@kernel.org> <triad@df.lth.se> 476 480 <linux-hardening@vger.kernel.org> <kernel-hardening@lists.openwall.com> 477 481 Li Yang <leoyang.li@nxp.com> <leoli@freescale.com> 478 482 Li Yang <leoyang.li@nxp.com> <leo@zh-kernel.org> ··· 801 797 Tejun Heo <htejun@gmail.com> 802 798 Tomeu Vizoso <tomeu@tomeuvizoso.net> <tomeu.vizoso@collabora.com> 803 799 Thomas Graf <tgraf@suug.ch> 800 + Thomas Gleixner <tglx@kernel.org> <tglx@linutronix.de> 804 801 Thomas Körper <socketcan@esd.eu> <thomas.koerper@esd.eu> 805 802 Thomas Pedersen <twp@codeaurora.org> 806 803 Thorsten Blum <thorsten.blum@linux.dev> <thorsten.blum@toblux.com>
+1 -1
CREDITS
··· 1398 1398 P: 1024D/8399E1BB 250D 3BCF 7127 0D8C A444 A961 1DBD 5E75 8399 E1BB 1399 1399 1400 1400 N: Thomas Gleixner 1401 - E: tglx@linutronix.de 1401 + E: tglx@kernel.org 1402 1402 D: NAND flash hardware support, JFFS2 on NAND flash 1403 1403 1404 1404 N: Jérôme Glisse
+1 -1
Documentation/ABI/stable/sysfs-kernel-time-aux-clocks
··· 1 1 What: /sys/kernel/time/aux_clocks/<ID>/enable 2 2 Date: May 2025 3 - Contact: Thomas Gleixner <tglx@linutronix.de> 3 + Contact: Thomas Gleixner <tglx@kernel.org> 4 4 Description: 5 5 Controls the enablement of auxiliary clock timekeepers.
+2 -2
Documentation/ABI/testing/sysfs-devices-soc
··· 17 17 contact: Lee Jones <lee@kernel.org> 18 18 Description: 19 19 Read-only attribute common to all SoCs. Contains the SoC machine 20 - name (e.g. Ux500). 20 + name (e.g. DB8500). 21 21 22 22 What: /sys/devices/socX/family 23 23 Date: January 2012 24 24 contact: Lee Jones <lee@kernel.org> 25 25 Description: 26 26 Read-only attribute common to all SoCs. Contains SoC family name 27 - (e.g. DB8500). 27 + (e.g. ux500). 28 28 29 29 On many of ARM based silicon with SMCCC v1.2+ compliant firmware 30 30 this will contain the JEDEC JEP106 manufacturer’s identification
+8
Documentation/admin-guide/sysctl/net.rst
··· 303 303 Maximum number of packets, queued on the INPUT side, when the interface 304 304 receives packets faster than kernel can process them. 305 305 306 + qdisc_max_burst 307 + ------------------ 308 + 309 + Maximum number of packets that can be temporarily stored before 310 + reaching qdisc. 311 + 312 + Default: 1000 313 + 306 314 netdev_rss_key 307 315 -------------- 308 316
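
For context, a quick way to sanity-check the new knob from userspace, assuming it surfaces as net.core.qdisc_max_burst under /proc/sys (the hunk documents the entry but not its exact path, so treat the path as an assumption):

    /* Minimal sketch: read the (assumed) /proc/sys path for qdisc_max_burst. */
    #include <stdio.h>

    int main(void)
    {
            FILE *f = fopen("/proc/sys/net/core/qdisc_max_burst", "r");
            long val;

            if (!f) {
                    perror("qdisc_max_burst");
                    return 1;
            }
            if (fscanf(f, "%ld", &val) == 1)
                    printf("qdisc_max_burst = %ld packets (default 1000)\n", val);
            fclose(f);
            return 0;
    }
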
+1 -1
Documentation/arch/x86/topology.rst
··· 17 17 Needless to say, code should use the generic functions - this file is *only* 18 18 here to *document* the inner workings of x86 topology. 19 19 20 - Started by Thomas Gleixner <tglx@linutronix.de> and Borislav Petkov <bp@alien8.de>. 20 + Started by Thomas Gleixner <tglx@kernel.org> and Borislav Petkov <bp@alien8.de>. 21 21 22 22 The main aim of the topology facilities is to present adequate interfaces to 23 23 code which needs to know/query/use the structure of the running system wrt
+1 -1
Documentation/core-api/cpu_hotplug.rst
··· 8 8 Srivatsa Vaddagiri <vatsa@in.ibm.com>, 9 9 Ashok Raj <ashok.raj@intel.com>, 10 10 Joel Schopp <jschopp@austin.ibm.com>, 11 - Thomas Gleixner <tglx@linutronix.de> 11 + Thomas Gleixner <tglx@kernel.org> 12 12 13 13 Introduction 14 14 ============
+1 -1
Documentation/core-api/genericirq.rst
··· 439 439 440 440 The following people have contributed to this document: 441 441 442 - 1. Thomas Gleixner tglx@linutronix.de 442 + 1. Thomas Gleixner tglx@kernel.org 443 443 444 444 2. Ingo Molnar mingo@elte.hu
+1 -1
Documentation/core-api/librs.rst
··· 209 209 210 210 The following people have contributed to this document: 211 211 212 - Thomas Gleixner\ tglx@linutronix.de 212 + Thomas Gleixner\ tglx@kernel.org
+8 -1
Documentation/devicetree/bindings/arm/fsl.yaml
··· 1105 1105 - gateworks,imx8mp-gw74xx # i.MX8MP Gateworks Board 1106 1106 - gateworks,imx8mp-gw75xx-2x # i.MX8MP Gateworks Board 1107 1107 - gateworks,imx8mp-gw82xx-2x # i.MX8MP Gateworks Board 1108 - - gocontroll,moduline-display # GOcontroll Moduline Display controller 1109 1108 - prt,prt8ml # Protonic PRT8ML 1110 1109 - skov,imx8mp-skov-basic # SKOV i.MX8MP baseboard without frontplate 1111 1110 - skov,imx8mp-skov-revb-hdmi # SKOV i.MX8MP climate control without panel ··· 1161 1162 - enum: 1162 1163 - engicam,icore-mx8mp-edimm2.2 # i.MX8MP Engicam i.Core MX8M Plus EDIMM2.2 Starter Kit 1163 1164 - const: engicam,icore-mx8mp # i.MX8MP Engicam i.Core MX8M Plus SoM 1165 + - const: fsl,imx8mp 1166 + 1167 + - description: Ka-Ro TX8P-ML81 SoM based boards 1168 + items: 1169 + - enum: 1170 + - gocontroll,moduline-display 1171 + - gocontroll,moduline-display-106 1172 + - const: karo,tx8p-ml81 1164 1173 - const: fsl,imx8mp 1165 1174 1166 1175 - description: Kontron i.MX8MP OSM-S SoM based Boards
+7 -1
Documentation/devicetree/bindings/misc/pci1de4,1.yaml
··· 25 25 items: 26 26 - const: pci1de4,1 27 27 28 + reg: 29 + maxItems: 1 30 + description: The PCI Bus-Device-Function address. 31 + 28 32 '#interrupt-cells': 29 33 const: 2 30 34 description: | ··· 105 101 106 102 required: 107 103 - compatible 104 + - reg 108 105 - '#interrupt-cells' 109 106 - interrupt-controller 110 107 - pci-ep-bus@1 ··· 116 111 #address-cells = <3>; 117 112 #size-cells = <2>; 118 113 119 - rp1@0,0 { 114 + dev@0,0 { 120 115 compatible = "pci1de4,1"; 116 + reg = <0x10000 0x0 0x0 0x0 0x0>; 121 117 ranges = <0x01 0x00 0x00000000 0x82010000 0x00 0x00 0x00 0x400000>; 122 118 #address-cells = <3>; 123 119 #size-cells = <2>;
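
The newly required reg property follows the standard PCI bus binding encoding: the first cell (phys.hi) packs the bus number in bits 23:16, the device in bits 15:11, and the function in bits 10:8. A small sketch of that packing, matching the reg = <0x10000 0x0 0x0 0x0 0x0> value in the example above:

    /* Sketch of the generic PCI 'reg' phys.hi packing used by the binding. */
    #include <assert.h>

    static unsigned int pci_phys_hi(unsigned int bus, unsigned int dev,
                                    unsigned int fn)
    {
            return (bus << 16) | (dev << 11) | (fn << 8);
    }

    int main(void)
    {
            /* dev@0,0 behind bus 1: matches reg = <0x10000 0x0 0x0 0x0 0x0>. */
            assert(pci_phys_hi(1, 0, 0) == 0x10000);
            return 0;
    }
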
+1 -1
Documentation/devicetree/bindings/timer/mrvl,mmp-timer.yaml
··· 8 8 9 9 maintainers: 10 10 - Daniel Lezcano <daniel.lezcano@linaro.org> 11 - - Thomas Gleixner <tglx@linutronix.de> 11 + - Thomas Gleixner <tglx@kernel.org> 12 12 - Rob Herring <robh@kernel.org> 13 13 14 14 properties:
+2 -2
Documentation/devicetree/bindings/ufs/ufs-common.yaml
··· 48 48 enum: [1, 2] 49 49 default: 2 50 50 description: 51 - Number of lanes available per direction. Note that it is assume same 52 - number of lanes is used both directions at once. 51 + Number of lanes available per direction. Note that it is assumed that 52 + the same number of lanes are used in both directions at once. 53 53 54 54 vdd-hba-supply: 55 55 description:
+2 -2
Documentation/driver-api/mtdnand.rst
··· 996 996 997 997 2. David Woodhouse\ dwmw2@infradead.org 998 998 999 - 3. Thomas Gleixner\ tglx@linutronix.de 999 + 3. Thomas Gleixner\ tglx@kernel.org 1000 1000 1001 1001 A lot of users have provided bugfixes, improvements and helping hands 1002 1002 for testing. Thanks a lot. 1003 1003 1004 1004 The following people have contributed to this document: 1005 1005 1006 - 1. Thomas Gleixner\ tglx@linutronix.de 1006 + 1. Thomas Gleixner\ tglx@kernel.org
+1
Documentation/filesystems/locking.rst
··· 416 416 lm_breaker_owns_lease: yes no no 417 417 lm_lock_expirable yes no no 418 418 lm_expire_lock no no yes 419 + lm_open_conflict yes no no 419 420 ====================== ============= ================= ========= 420 421 421 422 buffer_head
+6 -4
Documentation/process/maintainer-soc.rst
··· 57 57 58 58 All typical platform related patches should be sent via SoC submaintainers 59 59 (platform-specific maintainers). This includes also changes to per-platform or 60 - shared defconfigs (scripts/get_maintainer.pl might not provide correct 61 - addresses in such case). 60 + shared defconfigs. Note that scripts/get_maintainer.pl might not provide 61 + correct addresses for the shared defconfig, so ignore its output and manually 62 + create CC-list based on MAINTAINERS file or use something like 63 + ``scripts/get_maintainer.pl -f drivers/soc/FOO/``). 62 64 63 65 Submitting Patches to the Main SoC Maintainers 64 66 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ··· 116 114 Usually the branch that includes a driver change will also include the 117 115 corresponding change to the devicetree binding description, to ensure they are 118 116 in fact compatible. This means that the devicetree branch can end up causing 119 - warnings in the "make dtbs_check" step. If a devicetree change depends on 117 + warnings in the ``make dtbs_check`` step. If a devicetree change depends on 120 118 missing additions to a header file in include/dt-bindings/, it will fail the 121 - "make dtbs" step and not get merged. 119 + ``make dtbs`` step and not get merged. 122 120 123 121 There are multiple ways to deal with this: 124 122
+1 -1
Documentation/translations/zh_CN/core-api/cpu_hotplug.rst
··· 22 22 Srivatsa Vaddagiri <vatsa@in.ibm.com>, 23 23 Ashok Raj <ashok.raj@intel.com>, 24 24 Joel Schopp <jschopp@austin.ibm.com>, 25 - Thomas Gleixner <tglx@linutronix.de> 25 + Thomas Gleixner <tglx@kernel.org> 26 26 27 27 简介 28 28 ====
+1 -1
Documentation/translations/zh_CN/core-api/genericirq.rst
··· 404 404 405 405 感谢以下人士对本文档作出的贡献: 406 406 407 - 1. Thomas Gleixner tglx@linutronix.de 407 + 1. Thomas Gleixner tglx@kernel.org 408 408 409 409 2. Ingo Molnar mingo@elte.hu
+1 -1
Documentation/userspace-api/media/v4l/metafmt-arm-mali-c55.rst
··· 44 44 struct v4l2_isp_params_buffer *params = 45 45 (struct v4l2_isp_params_buffer *)buffer; 46 46 47 - params->version = MALI_C55_PARAM_BUFFER_V1; 47 + params->version = V4L2_ISP_PARAMS_VERSION_V1; 48 48 params->data_size = 0; 49 49 50 50 void *data = (void *)params->data;
+32 -25
MAINTAINERS
··· 2012 2012 M: Arnd Bergmann <arnd@arndb.de> 2013 2013 M: Krzysztof Kozlowski <krzk@kernel.org> 2014 2014 M: Alexandre Belloni <alexandre.belloni@bootlin.com> 2015 - M: Linus Walleij <linus.walleij@linaro.org> 2015 + M: Linus Walleij <linusw@kernel.org> 2016 2016 R: Drew Fustini <fustini@kernel.org> 2017 2017 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 2018 2018 L: soc@lists.linux.dev ··· 2159 2159 L: dri-devel@lists.freedesktop.org 2160 2160 S: Supported 2161 2161 W: https://rust-for-linux.com/tyr-gpu-driver 2162 - W https://drm.pages.freedesktop.org/maintainer-tools/drm-rust.html 2162 + W: https://drm.pages.freedesktop.org/maintainer-tools/drm-rust.html 2163 2163 B: https://gitlab.freedesktop.org/panfrost/linux/-/issues 2164 2164 T: git https://gitlab.freedesktop.org/drm/rust/kernel.git 2165 2165 F: Documentation/devicetree/bindings/gpu/arm,mali-valhall-csf.yaml ··· 5802 5802 5803 5803 CEPH COMMON CODE (LIBCEPH) 5804 5804 M: Ilya Dryomov <idryomov@gmail.com> 5805 - M: Xiubo Li <xiubli@redhat.com> 5805 + M: Alex Markuze <amarkuze@redhat.com> 5806 + M: Viacheslav Dubeyko <slava@dubeyko.com> 5806 5807 L: ceph-devel@vger.kernel.org 5807 5808 S: Supported 5808 5809 W: http://ceph.com/ ··· 5814 5813 F: net/ceph/ 5815 5814 5816 5815 CEPH DISTRIBUTED FILE SYSTEM CLIENT (CEPH) 5817 - M: Xiubo Li <xiubli@redhat.com> 5818 5816 M: Ilya Dryomov <idryomov@gmail.com> 5817 + M: Alex Markuze <amarkuze@redhat.com> 5818 + M: Viacheslav Dubeyko <slava@dubeyko.com> 5819 5819 L: ceph-devel@vger.kernel.org 5820 5820 S: Supported 5821 5821 W: http://ceph.com/ ··· 6175 6173 6176 6174 CLOCKSOURCE, CLOCKEVENT DRIVERS 6177 6175 M: Daniel Lezcano <daniel.lezcano@linaro.org> 6178 - M: Thomas Gleixner <tglx@linutronix.de> 6176 + M: Thomas Gleixner <tglx@kernel.org> 6179 6177 L: linux-kernel@vger.kernel.org 6180 6178 S: Supported 6181 6179 T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git timers/core ··· 6541 6539 F: drivers/cpufreq/virtual-cpufreq.c 6542 6540 6543 6541 CPU HOTPLUG 6544 - M: Thomas Gleixner <tglx@linutronix.de> 6542 + M: Thomas Gleixner <tglx@kernel.org> 6545 6543 M: Peter Zijlstra <peterz@infradead.org> 6546 6544 L: linux-kernel@vger.kernel.org 6547 6545 S: Maintained ··· 6708 6706 T: git https://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux.git libcrypto-next 6709 6707 T: git https://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux.git libcrypto-fixes 6710 6708 F: lib/crypto/ 6709 + F: scripts/crypto/ 6711 6710 6712 6711 CRYPTO SPEED TEST COMPARE 6713 6712 M: Wang Jinchao <wangjinchao@xfusion.com> ··· 6969 6966 F: drivers/scsi/dc395x.* 6970 6967 6971 6968 DEBUGOBJECTS: 6972 - M: Thomas Gleixner <tglx@linutronix.de> 6969 + M: Thomas Gleixner <tglx@kernel.org> 6973 6970 L: linux-kernel@vger.kernel.org 6974 6971 S: Maintained 6975 6972 T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git core/debugobjects ··· 8071 8068 Q: https://patchwork.freedesktop.org/project/nouveau/ 8072 8069 B: https://gitlab.freedesktop.org/drm/nova/-/issues 8073 8070 C: irc://irc.oftc.net/nouveau 8074 - T: git https://gitlab.freedesktop.org/drm/nova.git nova-next 8071 + T: git https://gitlab.freedesktop.org/drm/rust/kernel.git drm-rust-next 8075 8072 F: Documentation/gpu/nova/ 8076 8073 F: drivers/gpu/nova-core/ 8077 8074 ··· 8083 8080 Q: https://patchwork.freedesktop.org/project/nouveau/ 8084 8081 B: https://gitlab.freedesktop.org/drm/nova/-/issues 8085 8082 C: irc://irc.oftc.net/nouveau 8086 - T: git https://gitlab.freedesktop.org/drm/nova.git nova-next 
8083 + T: git https://gitlab.freedesktop.org/drm/rust/kernel.git drm-rust-next 8087 8084 F: Documentation/gpu/nova/ 8088 8085 F: drivers/gpu/drm/nova/ 8089 8086 F: include/uapi/drm/nova_drm.h ··· 8361 8358 X: drivers/gpu/drm/nova/ 8362 8359 X: drivers/gpu/drm/radeon/ 8363 8360 X: drivers/gpu/drm/tegra/ 8361 + X: drivers/gpu/drm/tyr/ 8364 8362 X: drivers/gpu/drm/xe/ 8365 8363 8366 8364 DRM DRIVERS AND COMMON INFRASTRUCTURE [RUST] ··· 10375 10371 F: tools/testing/selftests/filesystems/fuse/ 10376 10372 10377 10373 FUTEX SUBSYSTEM 10378 - M: Thomas Gleixner <tglx@linutronix.de> 10374 + M: Thomas Gleixner <tglx@kernel.org> 10379 10375 M: Ingo Molnar <mingo@redhat.com> 10380 10376 R: Peter Zijlstra <peterz@infradead.org> 10381 10377 R: Darren Hart <dvhart@infradead.org> ··· 10519 10515 F: include/linux/arch_topology.h 10520 10516 10521 10517 GENERIC ENTRY CODE 10522 - M: Thomas Gleixner <tglx@linutronix.de> 10518 + M: Thomas Gleixner <tglx@kernel.org> 10523 10519 M: Peter Zijlstra <peterz@infradead.org> 10524 10520 M: Andy Lutomirski <luto@kernel.org> 10525 10521 L: linux-kernel@vger.kernel.org ··· 10632 10628 10633 10629 GENERIC VDSO LIBRARY 10634 10630 M: Andy Lutomirski <luto@kernel.org> 10635 - M: Thomas Gleixner <tglx@linutronix.de> 10631 + M: Thomas Gleixner <tglx@kernel.org> 10636 10632 M: Vincenzo Frascino <vincenzo.frascino@arm.com> 10637 10633 L: linux-kernel@vger.kernel.org 10638 10634 S: Maintained ··· 11245 11241 HIGH-RESOLUTION TIMERS, TIMER WHEEL, CLOCKEVENTS 11246 11242 M: Anna-Maria Behnsen <anna-maria@linutronix.de> 11247 11243 M: Frederic Weisbecker <frederic@kernel.org> 11248 - M: Thomas Gleixner <tglx@linutronix.de> 11244 + M: Thomas Gleixner <tglx@kernel.org> 11249 11245 L: linux-kernel@vger.kernel.org 11250 11246 S: Maintained 11251 11247 T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git timers/core ··· 11268 11264 R: FUJITA Tomonori <fujita.tomonori@gmail.com> 11269 11265 R: Frederic Weisbecker <frederic@kernel.org> 11270 11266 R: Lyude Paul <lyude@redhat.com> 11271 - R: Thomas Gleixner <tglx@linutronix.de> 11267 + R: Thomas Gleixner <tglx@kernel.org> 11272 11268 R: Anna-Maria Behnsen <anna-maria@linutronix.de> 11273 11269 R: John Stultz <jstultz@google.com> 11274 11270 R: Stephen Boyd <sboyd@kernel.org> ··· 13338 13334 F: sound/soc/codecs/sma* 13339 13335 13340 13336 IRQ DOMAINS (IRQ NUMBER MAPPING LIBRARY) 13341 - M: Thomas Gleixner <tglx@linutronix.de> 13337 + M: Thomas Gleixner <tglx@kernel.org> 13342 13338 S: Maintained 13343 13339 T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git irq/core 13344 13340 F: Documentation/core-api/irq/irq-domain.rst ··· 13348 13344 F: kernel/irq/msi.c 13349 13345 13350 13346 IRQ SUBSYSTEM 13351 - M: Thomas Gleixner <tglx@linutronix.de> 13347 + M: Thomas Gleixner <tglx@kernel.org> 13352 13348 L: linux-kernel@vger.kernel.org 13353 13349 S: Maintained 13354 13350 T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git irq/core ··· 13361 13357 F: lib/group_cpus.c 13362 13358 13363 13359 IRQCHIP DRIVERS 13364 - M: Thomas Gleixner <tglx@linutronix.de> 13360 + M: Thomas Gleixner <tglx@kernel.org> 13365 13361 L: linux-kernel@vger.kernel.org 13366 13362 S: Maintained 13367 13363 T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git irq/core ··· 14455 14451 F: lib/* 14456 14452 14457 14453 LICENSES and SPDX stuff 14458 - M: Thomas Gleixner <tglx@linutronix.de> 14454 + M: Thomas Gleixner <tglx@kernel.org> 14459 14455 M: Greg Kroah-Hartman <gregkh@linuxfoundation.org> 14460 14456 L: 
linux-spdx@vger.kernel.org 14461 14457 S: Maintained ··· 14883 14879 M: Sathya Prakash <sathya.prakash@broadcom.com> 14884 14880 M: Sreekanth Reddy <sreekanth.reddy@broadcom.com> 14885 14881 M: Suganath Prabu Subramani <suganath-prabu.subramani@broadcom.com> 14882 + M: Ranjan Kumar <ranjan.kumar@broadcom.com> 14886 14883 L: MPT-FusionLinux.pdl@broadcom.com 14887 14884 L: linux-scsi@vger.kernel.org 14888 14885 S: Supported ··· 18441 18436 M: Sabrina Dubroca <sd@queasysnail.net> 18442 18437 L: netdev@vger.kernel.org 18443 18438 S: Maintained 18439 + F: Documentation/networking/tls* 18444 18440 F: include/net/tls.h 18445 18441 F: include/uapi/linux/tls.h 18446 - F: net/tls/* 18442 + F: net/tls/ 18443 + F: tools/testing/selftests/net/tls.c 18447 18444 18448 18445 NETWORKING [SOCKETS] 18449 18446 M: Eric Dumazet <edumazet@google.com> ··· 18597 18590 M: Anna-Maria Behnsen <anna-maria@linutronix.de> 18598 18591 M: Frederic Weisbecker <frederic@kernel.org> 18599 18592 M: Ingo Molnar <mingo@kernel.org> 18600 - M: Thomas Gleixner <tglx@linutronix.de> 18593 + M: Thomas Gleixner <tglx@kernel.org> 18601 18594 L: linux-kernel@vger.kernel.org 18602 18595 S: Maintained 18603 18596 T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git timers/nohz ··· 20782 20775 POSIX CLOCKS and TIMERS 20783 20776 M: Anna-Maria Behnsen <anna-maria@linutronix.de> 20784 20777 M: Frederic Weisbecker <frederic@kernel.org> 20785 - M: Thomas Gleixner <tglx@linutronix.de> 20778 + M: Thomas Gleixner <tglx@kernel.org> 20786 20779 L: linux-kernel@vger.kernel.org 20787 20780 S: Maintained 20788 20781 T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git timers/core ··· 26294 26287 26295 26288 TIMEKEEPING, CLOCKSOURCE CORE, NTP, ALARMTIMER 26296 26289 M: John Stultz <jstultz@google.com> 26297 - M: Thomas Gleixner <tglx@linutronix.de> 26290 + M: Thomas Gleixner <tglx@kernel.org> 26298 26291 R: Stephen Boyd <sboyd@kernel.org> 26299 26292 L: linux-kernel@vger.kernel.org 26300 26293 S: Supported ··· 28225 28218 F: net/x25/ 28226 28219 28227 28220 X86 ARCHITECTURE (32-BIT AND 64-BIT) 28228 - M: Thomas Gleixner <tglx@linutronix.de> 28221 + M: Thomas Gleixner <tglx@kernel.org> 28229 28222 M: Ingo Molnar <mingo@redhat.com> 28230 28223 M: Borislav Petkov <bp@alien8.de> 28231 28224 M: Dave Hansen <dave.hansen@linux.intel.com> ··· 28241 28234 28242 28235 X86 CPUID DATABASE 28243 28236 M: Borislav Petkov <bp@alien8.de> 28244 - M: Thomas Gleixner <tglx@linutronix.de> 28237 + M: Thomas Gleixner <tglx@kernel.org> 28245 28238 M: x86@kernel.org 28246 28239 R: Ahmed S. Darwish <darwi@linutronix.de> 28247 28240 L: x86-cpuid@lists.linux.dev ··· 28257 28250 F: arch/x86/entry/ 28258 28251 28259 28252 X86 HARDWARE VULNERABILITIES 28260 - M: Thomas Gleixner <tglx@linutronix.de> 28253 + M: Thomas Gleixner <tglx@kernel.org> 28261 28254 M: Borislav Petkov <bp@alien8.de> 28262 28255 M: Peter Zijlstra <peterz@infradead.org> 28263 28256 M: Josh Poimboeuf <jpoimboe@kernel.org>
+1 -1
Makefile
··· 2 2 VERSION = 6 3 3 PATCHLEVEL = 19 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc4 5 + EXTRAVERSION = -rc5 6 6 NAME = Baby Opossum Posse 7 7 8 8 # *DOCUMENTATION*
+11
arch/arm/boot/dts/intel/ixp/intel-ixp42x-actiontec-mi424wr-ac.dts
··· 12 12 model = "Actiontec MI424WR rev A/C"; 13 13 compatible = "actiontec,mi424wr-ac", "intel,ixp42x"; 14 14 15 + /* Connect the switch to EthC */ 16 + spi { 17 + ethernet-switch@0 { 18 + ethernet-ports { 19 + ethernet-port@4 { 20 + ethernet = <&ethc>; 21 + }; 22 + }; 23 + }; 24 + }; 25 + 15 26 soc { 16 27 /* EthB used for WAN */ 17 28 ethernet@c8009000 {
+11
arch/arm/boot/dts/intel/ixp/intel-ixp42x-actiontec-mi424wr-d.dts
··· 12 12 model = "Actiontec MI424WR rev D"; 13 13 compatible = "actiontec,mi424wr-d", "intel,ixp42x"; 14 14 15 + /* Connect the switch to EthB */ 16 + spi { 17 + ethernet-switch@0 { 18 + ethernet-ports { 19 + ethernet-port@4 { 20 + ethernet = <&ethb>; 21 + }; 22 + }; 23 + }; 24 + }; 25 + 15 26 soc { 16 27 /* EthB used for LAN */ 17 28 ethernet@c8009000 {
-1
arch/arm/boot/dts/intel/ixp/intel-ixp42x-actiontec-mi424wr.dtsi
··· 152 152 }; 153 153 ethernet-port@4 { 154 154 reg = <4>; 155 - ethernet = <&ethc>; 156 155 phy-mode = "mii"; 157 156 fixed-link { 158 157 speed = <100>;
+4 -4
arch/arm/boot/dts/nxp/imx/imx27-phytec-phycore-rdk.dts
··· 248 248 linux,default-trigger = "nand-disk"; 249 249 }; 250 250 251 - ledg3: led@10 { 252 - reg = <10>; 251 + ledg3: led@a { 252 + reg = <0xa>; 253 253 label = "system:green3:live"; 254 254 linux,default-trigger = "heartbeat"; 255 255 }; 256 256 257 - ledb3: led@11 { 258 - reg = <11>; 257 + ledb3: led@b { 258 + reg = <0xb>; 259 259 label = "system:blue3:cpu"; 260 260 linux,default-trigger = "cpu0"; 261 261 };
+2 -2
arch/arm/boot/dts/nxp/imx/imx51-zii-rdu1.dts
··· 398 398 #size-cells = <0>; 399 399 led-control = <0x0 0x0 0x3f83f8 0x0>; 400 400 401 - sysled0@3 { 401 + led@3 { 402 402 reg = <3>; 403 403 label = "system:green:status"; 404 404 linux,default-trigger = "default-on"; 405 405 }; 406 406 407 - sysled1@4 { 407 + led@4 { 408 408 reg = <4>; 409 409 label = "system:green:act"; 410 410 linux,default-trigger = "heartbeat";
+2 -2
arch/arm/boot/dts/nxp/imx/imx51-zii-scu2-mezz.dts
··· 225 225 #size-cells = <0>; 226 226 led-control = <0x0 0x0 0x3f83f8 0x0>; 227 227 228 - sysled3: led3@3 { 228 + sysled3: led@3 { 229 229 reg = <3>; 230 230 label = "system:red:power"; 231 231 linux,default-trigger = "default-on"; 232 232 }; 233 233 234 - sysled4: led4@4 { 234 + sysled4: led@4 { 235 235 reg = <4>; 236 236 label = "system:green:act"; 237 237 linux,default-trigger = "heartbeat";
+2 -2
arch/arm/boot/dts/nxp/imx/imx51-zii-scu3-esb.dts
··· 153 153 #size-cells = <0>; 154 154 led-control = <0x0 0x0 0x3f83f8 0x0>; 155 155 156 - sysled3: led3@3 { 156 + sysled3: led@3 { 157 157 reg = <3>; 158 158 label = "system:red:power"; 159 159 linux,default-trigger = "default-on"; 160 160 }; 161 161 162 - sysled4: led4@4 { 162 + sysled4: led@4 { 163 163 reg = <4>; 164 164 label = "system:green:act"; 165 165 linux,default-trigger = "heartbeat";
+1 -1
arch/arm/boot/dts/nxp/imx/imx6q-ba16.dtsi
··· 337 337 pinctrl-0 = <&pinctrl_rtc>; 338 338 reg = <0x32>; 339 339 interrupt-parent = <&gpio4>; 340 - interrupts = <10 IRQ_TYPE_LEVEL_HIGH>; 340 + interrupts = <10 IRQ_TYPE_LEVEL_LOW>; 341 341 }; 342 342 }; 343 343
+1 -3
arch/arm64/boot/dts/broadcom/Makefile
··· 7 7 bcm2711-rpi-4-b.dtb \ 8 8 bcm2711-rpi-cm4-io.dtb \ 9 9 bcm2712-rpi-5-b.dtb \ 10 - bcm2712-rpi-5-b-ovl-rp1.dtb \ 11 10 bcm2712-d-rpi-5-b.dtb \ 12 11 bcm2837-rpi-2-b.dtb \ 13 12 bcm2837-rpi-3-a-plus.dtb \ 14 13 bcm2837-rpi-3-b.dtb \ 15 14 bcm2837-rpi-3-b-plus.dtb \ 16 15 bcm2837-rpi-cm3-io3.dtb \ 17 - bcm2837-rpi-zero-2-w.dtb \ 18 - rp1.dtbo 16 + bcm2837-rpi-zero-2-w.dtb 19 17 20 18 subdir-y += bcmbca 21 19 subdir-y += northstar2
arch/arm64/boot/dts/broadcom/bcm2712-rpi-5-b-ovl-rp1.dts → arch/arm64/boot/dts/broadcom/bcm2712-rpi-5-b-base.dtsi (renamed)
+26 -13
arch/arm64/boot/dts/broadcom/bcm2712-rpi-5-b.dts
··· 1 1 // SPDX-License-Identifier: (GPL-2.0 OR MIT) 2 2 /* 3 - * bcm2712-rpi-5-b-ovl-rp1.dts is the overlay-ready DT which will make 4 - * the RP1 driver to load the RP1 dtb overlay at runtime, while 5 - * bcm2712-rpi-5-b.dts (this file) is the fully defined one (i.e. it 6 - * already contains RP1 node, so no overlay is loaded nor needed). 7 - * This file is intended to host the override nodes for the RP1 peripherals, 8 - * e.g. to declare the phy of the ethernet interface or the custom pin setup 9 - * for several RP1 peripherals. 10 - * This in turn is due to the fact that there's no current generic 11 - * infrastructure to reference nodes (i.e. the nodes in rp1-common.dtsi) that 12 - * are not yet defined in the DT since they are loaded at runtime via overlay. 3 + * As a loose attempt to separate RP1 customizations from SoC peripherals 4 + * definitioni, this file is intended to host the override nodes for the RP1 5 + * peripherals, e.g. to declare the phy of the ethernet interface or custom 6 + * pin setup. 13 7 * All other nodes that do not have anything to do with RP1 should be added 14 - * to the included bcm2712-rpi-5-b-ovl-rp1.dts instead. 8 + * to the included bcm2712-rpi-5-b-base.dtsi instead. 15 9 */ 16 10 17 11 /dts-v1/; 18 12 19 - #include "bcm2712-rpi-5-b-ovl-rp1.dts" 13 + #include "bcm2712-rpi-5-b-base.dtsi" 20 14 21 15 / { 22 16 aliases { ··· 19 25 }; 20 26 21 27 &pcie2 { 22 - #include "rp1-nexus.dtsi" 28 + pci@0,0 { 29 + reg = <0x0 0x0 0x0 0x0 0x0>; 30 + ranges; 31 + bus-range = <0 1>; 32 + device_type = "pci"; 33 + #address-cells = <3>; 34 + #size-cells = <2>; 35 + 36 + dev@0,0 { 37 + compatible = "pci1de4,1"; 38 + reg = <0x10000 0x0 0x0 0x0 0x0>; 39 + ranges = <0x1 0x0 0x0 0x82010000 0x0 0x0 0x0 0x400000>; 40 + interrupt-controller; 41 + #interrupt-cells = <2>; 42 + #address-cells = <3>; 43 + #size-cells = <2>; 44 + 45 + #include "rp1-common.dtsi" 46 + }; 47 + }; 23 48 }; 24 49 25 50 &rp1_eth {
-14
arch/arm64/boot/dts/broadcom/rp1-nexus.dtsi
··· 1 - // SPDX-License-Identifier: (GPL-2.0 OR MIT) 2 - 3 - rp1_nexus { 4 - compatible = "pci1de4,1"; 5 - #address-cells = <3>; 6 - #size-cells = <2>; 7 - ranges = <0x01 0x00 0x00000000 8 - 0x02000000 0x00 0x00000000 9 - 0x0 0x400000>; 10 - interrupt-controller; 11 - #interrupt-cells = <2>; 12 - 13 - #include "rp1-common.dtsi" 14 - };
-11
arch/arm64/boot/dts/broadcom/rp1.dtso
··· 1 - // SPDX-License-Identifier: (GPL-2.0 OR MIT) 2 - 3 - /dts-v1/; 4 - /plugin/; 5 - 6 - &pcie2 { 7 - #address-cells = <3>; 8 - #size-cells = <2>; 9 - 10 - #include "rp1-nexus.dtsi" 11 - };
+1
arch/arm64/boot/dts/freescale/imx8mp-dhcom-som.dtsi
··· 113 113 ethphy0f: ethernet-phy@1 { /* SMSC LAN8740Ai */ 114 114 compatible = "ethernet-phy-id0007.c110", 115 115 "ethernet-phy-ieee802.3-c22"; 116 + clocks = <&clk IMX8MP_CLK_ENET_QOS>; 116 117 interrupt-parent = <&gpio3>; 117 118 interrupts = <19 IRQ_TYPE_LEVEL_LOW>; 118 119 pinctrl-0 = <&pinctrl_ethphy0>;
+1 -1
arch/arm64/boot/dts/freescale/imx8mp-tx8p-ml81-moduline-display-106.dts
··· 9 9 #include "imx8mp-tx8p-ml81.dtsi" 10 10 11 11 / { 12 - compatible = "gocontroll,moduline-display", "fsl,imx8mp"; 12 + compatible = "gocontroll,moduline-display-106", "karo,tx8p-ml81", "fsl,imx8mp"; 13 13 chassis-type = "embedded"; 14 14 hardware = "Moduline Display V1.06"; 15 15 model = "GOcontroll Moduline Display baseboard";
+5
arch/arm64/boot/dts/freescale/imx8mp-tx8p-ml81.dtsi
··· 47 47 <&clk IMX8MP_SYS_PLL2_100M>, 48 48 <&clk IMX8MP_SYS_PLL2_50M>; 49 49 assigned-clock-rates = <266000000>, <100000000>, <50000000>; 50 + nvmem-cells = <&eth_mac1>; 50 51 phy-handle = <&ethphy0>; 51 52 phy-mode = "rmii"; 52 53 pinctrl-0 = <&pinctrl_eqos>; ··· 74 73 smsc,disable-energy-detect; 75 74 }; 76 75 }; 76 + }; 77 + 78 + &fec { 79 + nvmem-cells = <&eth_mac2>; 77 80 }; 78 81 79 82 &gpio1 {
+2 -1
arch/arm64/boot/dts/freescale/imx8qm-mek.dts
··· 263 263 regulator-max-microvolt = <3000000>; 264 264 gpio = <&lsio_gpio4 7 GPIO_ACTIVE_HIGH>; 265 265 enable-active-high; 266 + off-on-delay-us = <4800>; 266 267 }; 267 268 268 269 reg_audio: regulator-audio { ··· 577 576 compatible = "isil,isl29023"; 578 577 reg = <0x44>; 579 578 interrupt-parent = <&lsio_gpio4>; 580 - interrupts = <11 IRQ_TYPE_EDGE_FALLING>; 579 + interrupts = <11 IRQ_TYPE_LEVEL_LOW>; 581 580 }; 582 581 583 582 pressure-sensor@60 {
+4 -4
arch/arm64/boot/dts/freescale/imx8qm-ss-dma.dtsi
··· 172 172 173 173 &lpuart0 { 174 174 compatible = "fsl,imx8qm-lpuart", "fsl,imx8qxp-lpuart"; 175 - dmas = <&edma2 13 0 0>, <&edma2 12 0 1>; 175 + dmas = <&edma2 12 0 FSL_EDMA_RX>, <&edma2 13 0 0>; 176 176 dma-names = "rx","tx"; 177 177 }; 178 178 179 179 &lpuart1 { 180 180 compatible = "fsl,imx8qm-lpuart", "fsl,imx8qxp-lpuart"; 181 - dmas = <&edma2 15 0 0>, <&edma2 14 0 1>; 181 + dmas = <&edma2 14 0 FSL_EDMA_RX>, <&edma2 15 0 0>; 182 182 dma-names = "rx","tx"; 183 183 }; 184 184 185 185 &lpuart2 { 186 186 compatible = "fsl,imx8qm-lpuart", "fsl,imx8qxp-lpuart"; 187 - dmas = <&edma2 17 0 0>, <&edma2 16 0 1>; 187 + dmas = <&edma2 16 0 FSL_EDMA_RX>, <&edma2 17 0 0>; 188 188 dma-names = "rx","tx"; 189 189 }; 190 190 191 191 &lpuart3 { 192 192 compatible = "fsl,imx8qm-lpuart", "fsl,imx8qxp-lpuart"; 193 - dmas = <&edma2 19 0 0>, <&edma2 18 0 1>; 193 + dmas = <&edma2 18 0 FSL_EDMA_RX>, <&edma2 19 0 0>; 194 194 dma-names = "rx","tx"; 195 195 }; 196 196
+1 -3
arch/arm64/boot/dts/freescale/imx95-toradex-smarc.dtsi
··· 406 406 "", 407 407 "", 408 408 "", 409 - "", 410 - "", 411 409 "SMARC_SDIO_WP"; 412 410 }; 413 411 ··· 580 582 ethphy1: ethernet-phy@1 { 581 583 reg = <1>; 582 584 interrupt-parent = <&som_gpio_expander_1>; 583 - interrupts = <6 IRQ_TYPE_LEVEL_LOW>; 585 + interrupts = <6 IRQ_TYPE_EDGE_FALLING>; 584 586 ti,rx-internal-delay = <DP83867_RGMIIDCTL_2_00_NS>; 585 587 ti,tx-internal-delay = <DP83867_RGMIIDCTL_2_00_NS>; 586 588 };
+1 -1
arch/arm64/boot/dts/freescale/imx95.dtsi
··· 828 828 interrupts = <GIC_SPI 57 IRQ_TYPE_LEVEL_HIGH>; 829 829 #address-cells = <3>; 830 830 #size-cells = <0>; 831 - clocks = <&scmi_clk IMX95_CLK_BUSAON>, 831 + clocks = <&scmi_clk IMX95_CLK_BUSWAKEUP>, 832 832 <&scmi_clk IMX95_CLK_I3C2SLOW>; 833 833 clock-names = "pclk", "fast_clk"; 834 834 status = "disabled";
+1 -1
arch/arm64/boot/dts/freescale/mba8mx.dtsi
··· 192 192 reset-assert-us = <500000>; 193 193 reset-deassert-us = <500>; 194 194 interrupt-parent = <&expander2>; 195 - interrupts = <6 IRQ_TYPE_EDGE_FALLING>; 195 + interrupts = <6 IRQ_TYPE_LEVEL_LOW>; 196 196 }; 197 197 }; 198 198 };
-3
arch/arm64/boot/dts/hisilicon/hi3660-hikey960.dts
··· 675 675 snps,lfps_filter_quirk; 676 676 snps,dis_u2_susphy_quirk; 677 677 snps,dis_u3_susphy_quirk; 678 - snps,tx_de_emphasis_quirk; 679 - snps,tx_de_emphasis = <1>; 680 678 snps,dis_enblslpm_quirk; 681 - snps,gctl-reset-quirk; 682 679 usb-role-switch; 683 680 role-switch-default-mode = "host"; 684 681 port {
+1 -1
arch/arm64/boot/dts/ti/k3-am62-lp-sk-nand.dtso
··· 14 14 }; 15 15 16 16 &main_pmx0 { 17 - gpmc0_pins_default: gpmc0-pins-default { 17 + gpmc0_pins_default: gpmc0-default-pins { 18 18 pinctrl-single,pins = < 19 19 AM62X_IOPAD(0x003c, PIN_INPUT, 0) /* (K19) GPMC0_AD0 */ 20 20 AM62X_IOPAD(0x0040, PIN_INPUT, 0) /* (L19) GPMC0_AD1 */
+2 -5
arch/arm64/boot/dts/ti/k3-am642-phyboard-electra-peb-c-010.dtso
··· 30 30 <&main_pktdma 0xc206 15>, /* egress slice 1 */ 31 31 <&main_pktdma 0xc207 15>, /* egress slice 1 */ 32 32 <&main_pktdma 0x4200 15>, /* ingress slice 0 */ 33 - <&main_pktdma 0x4201 15>, /* ingress slice 1 */ 34 - <&main_pktdma 0x4202 0>, /* mgmnt rsp slice 0 */ 35 - <&main_pktdma 0x4203 0>; /* mgmnt rsp slice 1 */ 33 + <&main_pktdma 0x4201 15>; /* ingress slice 1 */ 36 34 dma-names = "tx0-0", "tx0-1", "tx0-2", "tx0-3", 37 35 "tx1-0", "tx1-1", "tx1-2", "tx1-3", 38 - "rx0", "rx1", 39 - "rxmgm0", "rxmgm1"; 36 + "rx0", "rx1"; 40 37 41 38 firmware-name = "ti-pruss/am65x-sr2-pru0-prueth-fw.elf", 42 39 "ti-pruss/am65x-sr2-rtu0-prueth-fw.elf",
+4 -4
arch/arm64/boot/dts/ti/k3-am642-phyboard-electra-x27-gpio1-spi1-uart3.dtso
··· 20 20 }; 21 21 22 22 &main_pmx0 { 23 - main_gpio1_exp_header_gpio_pins_default: main-gpio1-exp-header-gpio-pins-default { 23 + main_gpio1_exp_header_gpio_pins_default: main-gpio1-exp-header-gpio-default-pins { 24 24 pinctrl-single,pins = < 25 25 AM64X_IOPAD(0x0220, PIN_INPUT, 7) /* (D14) SPI1_CS1.GPIO1_48 */ 26 26 >; 27 27 }; 28 28 29 - main_spi1_pins_default: main-spi1-pins-default { 29 + main_spi1_pins_default: main-spi1-default-pins { 30 30 pinctrl-single,pins = < 31 31 AM64X_IOPAD(0x0224, PIN_INPUT, 0) /* (C14) SPI1_CLK */ 32 32 AM64X_IOPAD(0x021C, PIN_OUTPUT, 0) /* (B14) SPI1_CS0 */ ··· 35 35 >; 36 36 }; 37 37 38 - main_uart3_pins_default: main-uart3-pins-default { 38 + main_uart3_pins_default: main-uart3-default-pins { 39 39 pinctrl-single,pins = < 40 40 AM64X_IOPAD(0x0048, PIN_INPUT, 2) /* (U20) GPMC0_AD3.UART3_RXD */ 41 41 AM64X_IOPAD(0x004c, PIN_OUTPUT, 2) /* (U18) GPMC0_AD4.UART3_TXD */ ··· 52 52 &main_spi1 { 53 53 pinctrl-names = "default"; 54 54 pinctrl-0 = <&main_spi1_pins_default>; 55 - ti,pindir-d0-out-d1-in = <1>; 55 + ti,pindir-d0-out-d1-in; 56 56 status = "okay"; 57 57 }; 58 58
+1 -1
arch/arm64/include/asm/efi.h
··· 45 45 * switching to the EFI runtime stack. 46 46 */ 47 47 #define current_in_efi() \ 48 - (!preemptible() && efi_rt_stack_top != NULL && \ 48 + (efi_rt_stack_top != NULL && \ 49 49 on_task_stack(current, READ_ONCE(efi_rt_stack_top[-1]), 1)) 50 50 51 51 #define ARCH_EFI_IRQ_FLAGS_MASK (PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT)
+1 -1
arch/arm64/include/asm/suspend.h
··· 2 2 #ifndef __ASM_SUSPEND_H 3 3 #define __ASM_SUSPEND_H 4 4 5 - #define NR_CTX_REGS 13 5 + #define NR_CTX_REGS 14 6 6 #define NR_CALLEE_SAVED_REGS 12 7 7 8 8 /*
+4 -2
arch/arm64/mm/pageattr.c
··· 171 171 */ 172 172 area = find_vm_area((void *)addr); 173 173 if (!area || 174 - end > (unsigned long)kasan_reset_tag(area->addr) + area->size || 174 + ((unsigned long)kasan_reset_tag((void *)end) > 175 + (unsigned long)kasan_reset_tag(area->addr) + area->size) || 175 176 ((area->flags & (VM_ALLOC | VM_ALLOW_HUGE_VMAP)) != VM_ALLOC)) 176 177 return -EINVAL; 177 178 ··· 185 184 */ 186 185 if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY || 187 186 pgprot_val(clear_mask) == PTE_RDONLY)) { 188 - unsigned long idx = (start - (unsigned long)kasan_reset_tag(area->addr)) 187 + unsigned long idx = ((unsigned long)kasan_reset_tag((void *)start) - 188 + (unsigned long)kasan_reset_tag(area->addr)) 189 189 >> PAGE_SHIFT; 190 190 for (; numpages; idx++, numpages--) { 191 191 ret = __change_memory_common((u64)page_address(area->pages[idx]),
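
The pageattr.c fix boils down to never comparing a tagged pointer against an untagged one: with hardware tag-based KASAN the top byte of a vmalloc address carries a tag, so both operands must go through kasan_reset_tag() before any comparison or subtraction. A standalone sketch of the pitfall, with a deliberately simplified top-byte tag layout (the real helper is arch-specific):

    #include <stdint.h>
    #include <stdio.h>

    /* Simplified: pretend the tag lives in bits 63..56 and "reset" means
     * forcing the match-all tag 0xff, roughly as on arm64. */
    static uint64_t reset_tag(uint64_t addr)
    {
            return addr | (0xffULL << 56);
    }

    int main(void)
    {
            uint64_t tagged = 0x0a00ffff80001000ULL; /* tag 0x0a */
            uint64_t plain  = 0xff00ffff80001000ULL; /* tag already reset */

            /* Same underlying address, but the raw comparison disagrees. */
            printf("raw: %d, reset: %d\n",
                   tagged == plain, reset_tag(tagged) == reset_tag(plain));
            return 0;
    }
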
+8
arch/arm64/mm/proc.S
··· 110 110 * call stack. 111 111 */ 112 112 str x18, [x0, #96] 113 + alternative_if ARM64_HAS_TCR2 114 + mrs x2, REG_TCR2_EL1 115 + str x2, [x0, #104] 116 + alternative_else_nop_endif 113 117 ret 114 118 SYM_FUNC_END(cpu_do_suspend) 115 119 ··· 148 144 msr tcr_el1, x8 149 145 msr vbar_el1, x9 150 146 msr mdscr_el1, x10 147 + alternative_if ARM64_HAS_TCR2 148 + ldr x2, [x0, #104] 149 + msr REG_TCR2_EL1, x2 150 + alternative_else_nop_endif 151 151 152 152 msr sctlr_el1, x12 153 153 set_this_cpu_offset x13
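
The suspend.h and proc.S hunks are one logical change: cpu_do_suspend used to end with x18 stored at byte offset 96 (slot 12 of the 13-slot context array), and the TCR2_EL1 value now saved at offset 104 occupies slot 13, which is why NR_CTX_REGS grows to 14. The arithmetic restated as a compile-time check (a sketch, not code from the patch):

    /* Each slot in the cpu_suspend context array holds one 64-bit register. */
    #define NR_CTX_REGS 14

    _Static_assert(96 / 8 == 12, "x18 lands in slot 12");
    _Static_assert(104 / 8 + 1 == NR_CTX_REGS,
                   "the TCR2_EL1 slot at offset 104 must fit the array");
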
-4
arch/riscv/boot/Makefile
··· 31 31 32 32 endif 33 33 34 - ifdef CONFIG_RELOCATABLE 35 - $(obj)/Image: vmlinux.unstripped FORCE 36 - else 37 34 $(obj)/Image: vmlinux FORCE 38 - endif 39 35 $(call if_changed,objcopy) 40 36 41 37 $(obj)/Image.gz: $(obj)/Image FORCE
-2
arch/riscv/configs/nommu_k210_defconfig
··· 55 55 # CONFIG_HW_RANDOM is not set 56 56 # CONFIG_DEVMEM is not set 57 57 CONFIG_I2C=y 58 - # CONFIG_I2C_COMPAT is not set 59 58 CONFIG_I2C_CHARDEV=y 60 59 # CONFIG_I2C_HELPER_AUTO is not set 61 60 CONFIG_I2C_DESIGNWARE_CORE=y ··· 88 89 # CONFIG_FRAME_POINTER is not set 89 90 # CONFIG_DEBUG_MISC is not set 90 91 CONFIG_PANIC_ON_OOPS=y 91 - # CONFIG_SCHED_DEBUG is not set 92 92 # CONFIG_RCU_TRACE is not set 93 93 # CONFIG_FTRACE is not set 94 94 # CONFIG_RUNTIME_TESTING_MENU is not set
-1
arch/riscv/configs/nommu_k210_sdcard_defconfig
··· 86 86 # CONFIG_FRAME_POINTER is not set 87 87 # CONFIG_DEBUG_MISC is not set 88 88 CONFIG_PANIC_ON_OOPS=y 89 - # CONFIG_SCHED_DEBUG is not set 90 89 # CONFIG_RCU_TRACE is not set 91 90 # CONFIG_FTRACE is not set 92 91 # CONFIG_RUNTIME_TESTING_MENU is not set
-1
arch/riscv/configs/nommu_virt_defconfig
··· 66 66 # CONFIG_MISC_FILESYSTEMS is not set 67 67 CONFIG_LSM="[]" 68 68 CONFIG_PRINTK_TIME=y 69 - # CONFIG_SCHED_DEBUG is not set 70 69 # CONFIG_RCU_TRACE is not set 71 70 # CONFIG_FTRACE is not set 72 71 # CONFIG_RUNTIME_TESTING_MENU is not set
-1
arch/riscv/include/asm/bitops.h
··· 11 11 #endif /* _LINUX_BITOPS_H */ 12 12 13 13 #include <linux/compiler.h> 14 - #include <linux/irqflags.h> 15 14 #include <asm/barrier.h> 16 15 #include <asm/bitsperlong.h> 17 16
-4
arch/riscv/include/asm/pgtable.h
··· 124 124 #ifdef CONFIG_64BIT 125 125 #include <asm/pgtable-64.h> 126 126 127 - #define VA_USER_SV39 (UL(1) << (VA_BITS_SV39 - 1)) 128 - #define VA_USER_SV48 (UL(1) << (VA_BITS_SV48 - 1)) 129 - #define VA_USER_SV57 (UL(1) << (VA_BITS_SV57 - 1)) 130 - 131 127 #define MMAP_VA_BITS_64 ((VA_BITS >= VA_BITS_SV48) ? VA_BITS_SV48 : VA_BITS) 132 128 #define MMAP_MIN_VA_BITS_64 (VA_BITS_SV39) 133 129 #define MMAP_VA_BITS (is_compat_task() ? VA_BITS_SV32 : MMAP_VA_BITS_64)
+8 -7
arch/riscv/kernel/Makefile
··· 3 3 # Makefile for the RISC-V Linux kernel 4 4 # 5 5 6 - ifdef CONFIG_FTRACE 7 - CFLAGS_REMOVE_ftrace.o = $(CC_FLAGS_FTRACE) 8 - CFLAGS_REMOVE_patch.o = $(CC_FLAGS_FTRACE) 9 - CFLAGS_REMOVE_sbi.o = $(CC_FLAGS_FTRACE) 10 - CFLAGS_REMOVE_return_address.o = $(CC_FLAGS_FTRACE) 11 - endif 12 6 CFLAGS_syscall_table.o += $(call cc-disable-warning, override-init) 13 7 CFLAGS_compat_syscall_table.o += $(call cc-disable-warning, override-init) 14 8 ··· 18 24 ifdef CONFIG_FTRACE 19 25 CFLAGS_REMOVE_alternative.o = $(CC_FLAGS_FTRACE) 20 26 CFLAGS_REMOVE_cpufeature.o = $(CC_FLAGS_FTRACE) 21 - CFLAGS_REMOVE_sbi_ecall.o = $(CC_FLAGS_FTRACE) 22 27 endif 23 28 ifdef CONFIG_RELOCATABLE 24 29 CFLAGS_alternative.o += -fno-pie ··· 34 41 CFLAGS_cpufeature.o += -D__NO_FORTIFY 35 42 CFLAGS_sbi_ecall.o += -D__NO_FORTIFY 36 43 endif 44 + endif 45 + 46 + ifdef CONFIG_FTRACE 47 + CFLAGS_REMOVE_ftrace.o = $(CC_FLAGS_FTRACE) 48 + CFLAGS_REMOVE_patch.o = $(CC_FLAGS_FTRACE) 49 + CFLAGS_REMOVE_sbi.o = $(CC_FLAGS_FTRACE) 50 + CFLAGS_REMOVE_return_address.o = $(CC_FLAGS_FTRACE) 51 + CFLAGS_REMOVE_sbi_ecall.o = $(CC_FLAGS_FTRACE) 37 52 endif 38 53 39 54 always-$(KBUILD_BUILTIN) += vmlinux.lds
+1 -1
arch/riscv/kernel/cpu_ops_sbi.c
··· 85 85 int ret; 86 86 87 87 ret = sbi_hsm_hart_stop(); 88 - pr_crit("Unable to stop the cpu %u (%d)\n", smp_processor_id(), ret); 88 + pr_crit("Unable to stop the cpu %d (%d)\n", smp_processor_id(), ret); 89 89 } 90 90 91 91 static int sbi_cpu_is_stopped(unsigned int cpuid)
+11 -12
arch/riscv/kernel/cpufeature.c
··· 301 301 RISCV_ISA_EXT_ZALRSC, 302 302 }; 303 303 304 + #define RISCV_ISA_EXT_ZKN \ 305 + RISCV_ISA_EXT_ZBKB, \ 306 + RISCV_ISA_EXT_ZBKC, \ 307 + RISCV_ISA_EXT_ZBKX, \ 308 + RISCV_ISA_EXT_ZKND, \ 309 + RISCV_ISA_EXT_ZKNE, \ 310 + RISCV_ISA_EXT_ZKNH 311 + 304 312 static const unsigned int riscv_zk_bundled_exts[] = { 305 - RISCV_ISA_EXT_ZBKB, 306 - RISCV_ISA_EXT_ZBKC, 307 - RISCV_ISA_EXT_ZBKX, 308 - RISCV_ISA_EXT_ZKND, 309 - RISCV_ISA_EXT_ZKNE, 313 + RISCV_ISA_EXT_ZKN, 310 314 RISCV_ISA_EXT_ZKR, 311 - RISCV_ISA_EXT_ZKT, 315 + RISCV_ISA_EXT_ZKT 312 316 }; 313 317 314 318 static const unsigned int riscv_zkn_bundled_exts[] = { 315 - RISCV_ISA_EXT_ZBKB, 316 - RISCV_ISA_EXT_ZBKC, 317 - RISCV_ISA_EXT_ZBKX, 318 - RISCV_ISA_EXT_ZKND, 319 - RISCV_ISA_EXT_ZKNE, 320 - RISCV_ISA_EXT_ZKNH, 319 + RISCV_ISA_EXT_ZKN 321 320 }; 322 321 323 322 static const unsigned int riscv_zks_bundled_exts[] = {
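
The new RISCV_ISA_EXT_ZKN macro leans on a basic C property: a macro expanding to a comma-separated list can contribute several elements to an array initializer, so both bundle tables pick up the Zkn members without duplicating them. A toy illustration of the idiom (the names below are made up):

    #include <stdio.h>

    #define SHARED_EXTS 1, 2, 3 /* hypothetical bundle members */

    static const int bundle_a[] = { SHARED_EXTS };
    static const int bundle_b[] = { SHARED_EXTS, 4, 5 };

    int main(void)
    {
            printf("%zu %zu\n",
                   sizeof(bundle_a) / sizeof(*bundle_a),  /* 3 */
                   sizeof(bundle_b) / sizeof(*bundle_b)); /* 5 */
            return 0;
    }
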
+1 -1
arch/riscv/kernel/kexec_image.c
··· 22 22 if (!h || kernel_len < sizeof(*h)) 23 23 return -EINVAL; 24 24 25 - /* According to Documentation/riscv/boot-image-header.rst, 25 + /* According to Documentation/arch/riscv/boot-image-header.rst, 26 26 * use "magic2" field to check when version >= 0.2. 27 27 */ 28 28
+2
arch/riscv/kernel/tests/kprobes/test-kprobes-asm.S
··· 181 181 182 182 #endif /* CONFIG_RISCV_ISA_C */ 183 183 184 + .section .rodata 184 185 SYM_DATA_START(test_kprobes_addresses) 185 186 RISCV_PTR test_kprobes_add_addr1 186 187 RISCV_PTR test_kprobes_add_addr2 ··· 213 212 RISCV_PTR 0 214 213 SYM_DATA_END(test_kprobes_addresses) 215 214 215 + .section .rodata 216 216 SYM_DATA_START(test_kprobes_functions) 217 217 RISCV_PTR test_kprobes_add 218 218 RISCV_PTR test_kprobes_jal
+3 -1
arch/riscv/kernel/traps.c
··· 339 339 340 340 add_random_kstack_offset(); 341 341 342 - if (syscall >= 0 && syscall < NR_syscalls) 342 + if (syscall >= 0 && syscall < NR_syscalls) { 343 + syscall = array_index_nospec(syscall, NR_syscalls); 343 344 syscall_handler(regs, syscall); 345 + } 344 346 345 347 /* 346 348 * Ultimately, this value will get limited by KSTACK_OFFSET_MAX(),
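
The bounds check alone is not enough under speculative execution: the CPU can still issue the syscall-table load with an out-of-range index before the branch resolves, and array_index_nospec() flattens the index to zero on that mis-speculated path. A sketch of where the clamp sits; note the real kernel helper is branchless (mask-based), which this portable stand-in deliberately is not:

    #include <stddef.h>

    /* Stand-in for array_index_nospec(): in the kernel this is a branchless
     * mask so the clamp also holds on the speculative path. */
    static size_t index_nospec(size_t idx, size_t size)
    {
            return idx < size ? idx : 0;
    }

    long dispatch(const long *table, size_t nr, size_t idx)
    {
            if (idx >= nr)
                    return -1;
            idx = index_nospec(idx, nr); /* clamp before the dependent load */
            return table[idx];
    }
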
+2 -4
arch/riscv/net/bpf_jit_comp64.c
··· 1133 1133 1134 1134 store_args(nr_arg_slots, args_off, ctx); 1135 1135 1136 - /* skip to actual body of traced function */ 1137 - if (flags & BPF_TRAMP_F_ORIG_STACK) 1138 - orig_call += RV_FENTRY_NINSNS * 4; 1139 - 1140 1136 if (flags & BPF_TRAMP_F_CALL_ORIG) { 1141 1137 emit_imm(RV_REG_A0, ctx->insns ? (const s64)im : RV_MAX_COUNT_IMM, ctx); 1142 1138 ret = emit_call((const u64)__bpf_tramp_enter, true, ctx); ··· 1167 1171 } 1168 1172 1169 1173 if (flags & BPF_TRAMP_F_CALL_ORIG) { 1174 + /* skip to actual body of traced function */ 1175 + orig_call += RV_FENTRY_NINSNS * 4; 1170 1176 restore_args(min_t(int, nr_arg_slots, RV_MAX_REG_ARGS), args_off, ctx); 1171 1177 restore_stack_args(nr_arg_slots - RV_MAX_REG_ARGS, args_off, stk_arg_off, ctx); 1172 1178 ret = emit_call((const u64)orig_call, true, ctx);
+1 -1
arch/sh/kernel/perf_event.c
··· 7 7 * Heavily based on the x86 and PowerPC implementations. 8 8 * 9 9 * x86: 10 - * Copyright (C) 2008 Thomas Gleixner <tglx@linutronix.de> 10 + * Copyright (C) 2008 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 11 11 * Copyright (C) 2008-2009 Red Hat, Inc., Ingo Molnar 12 12 * Copyright (C) 2009 Jaswinder Singh Rajput 13 13 * Copyright (C) 2009 Advanced Micro Devices, Inc., Robert Richter
+23
arch/sparc/kernel/pci.c
··· 181 181 182 182 __setup("ofpci_debug=", ofpci_debug); 183 183 184 + static void of_fixup_pci_pref(struct pci_dev *dev, int index, 185 + struct resource *res) 186 + { 187 + struct pci_bus_region region; 188 + 189 + if (!(res->flags & IORESOURCE_MEM_64)) 190 + return; 191 + 192 + if (!resource_size(res)) 193 + return; 194 + 195 + pcibios_resource_to_bus(dev->bus, &region, res); 196 + if (region.end <= ~((u32)0)) 197 + return; 198 + 199 + if (!(res->flags & IORESOURCE_PREFETCH)) { 200 + res->flags |= IORESOURCE_PREFETCH; 201 + pci_info(dev, "reg 0x%x: fixup: pref added to 64-bit resource\n", 202 + index); 203 + } 204 + } 205 + 184 206 static unsigned long pci_parse_of_flags(u32 addr0) 185 207 { 186 208 unsigned long flags = 0; ··· 266 244 res->end = op_res->end; 267 245 res->flags = flags; 268 246 res->name = pci_name(dev); 247 + of_fixup_pci_pref(dev, i, res); 269 248 270 249 pci_info(dev, "reg 0x%x: %pR\n", i, res); 271 250 }
+1 -1
arch/sparc/kernel/perf_event.c
··· 6 6 * This code is based almost entirely upon the x86 perf event 7 7 * code, which is: 8 8 * 9 - * Copyright (C) 2008 Thomas Gleixner <tglx@linutronix.de> 9 + * Copyright (C) 2008 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 10 10 * Copyright (C) 2008-2009 Red Hat, Inc., Ingo Molnar 11 11 * Copyright (C) 2009 Jaswinder Singh Rajput 12 12 * Copyright (C) 2009 Advanced Micro Devices, Inc., Robert Richter
+2
arch/x86/coco/sev/Makefile
··· 8 8 # GCC may fail to respect __no_sanitize_address or __no_kcsan when inlining 9 9 KASAN_SANITIZE_noinstr.o := n 10 10 KCSAN_SANITIZE_noinstr.o := n 11 + 12 + GCOV_PROFILE_noinstr.o := n
+1 -1
arch/x86/events/core.c
··· 1 1 /* 2 2 * Performance events x86 architecture code 3 3 * 4 - * Copyright (C) 2008 Thomas Gleixner <tglx@linutronix.de> 4 + * Copyright (C) 2008 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 5 5 * Copyright (C) 2008-2009 Red Hat, Inc., Ingo Molnar 6 6 * Copyright (C) 2009 Jaswinder Singh Rajput 7 7 * Copyright (C) 2009 Advanced Micro Devices, Inc., Robert Richter
+1 -1
arch/x86/events/perf_event.h
··· 1 1 /* 2 2 * Performance events x86 architecture header 3 3 * 4 - * Copyright (C) 2008 Thomas Gleixner <tglx@linutronix.de> 4 + * Copyright (C) 2008 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 5 5 * Copyright (C) 2008-2009 Red Hat, Inc., Ingo Molnar 6 6 * Copyright (C) 2009 Jaswinder Singh Rajput 7 7 * Copyright (C) 2009 Advanced Micro Devices, Inc., Robert Richter
+29 -3
arch/x86/kernel/fpu/core.c
··· 319 319 #ifdef CONFIG_X86_64 320 320 void fpu_update_guest_xfd(struct fpu_guest *guest_fpu, u64 xfd) 321 321 { 322 + struct fpstate *fpstate = guest_fpu->fpstate; 323 + 322 324 fpregs_lock(); 323 - guest_fpu->fpstate->xfd = xfd; 324 - if (guest_fpu->fpstate->in_use) 325 - xfd_update_state(guest_fpu->fpstate); 325 + 326 + /* 327 + * KVM's guest ABI is that setting XFD[i]=1 *can* immediately revert the 328 + * save state to its initial configuration. Likewise, KVM_GET_XSAVE does 329 + * the same as XSAVE and returns XSTATE_BV[i]=0 whenever XFD[i]=1. 330 + * 331 + * If the guest's FPU state is in hardware, just update XFD: the XSAVE 332 + * in fpu_swap_kvm_fpstate will clear XSTATE_BV[i] whenever XFD[i]=1. 333 + * 334 + * If however the guest's FPU state is NOT resident in hardware, clear 335 + * disabled components in XSTATE_BV now, or a subsequent XRSTOR will 336 + * attempt to load disabled components and generate #NM _in the host_. 337 + */ 338 + if (xfd && test_thread_flag(TIF_NEED_FPU_LOAD)) 339 + fpstate->regs.xsave.header.xfeatures &= ~xfd; 340 + 341 + fpstate->xfd = xfd; 342 + if (fpstate->in_use) 343 + xfd_update_state(fpstate); 344 + 326 345 fpregs_unlock(); 327 346 } 328 347 EXPORT_SYMBOL_FOR_KVM(fpu_update_guest_xfd); ··· 447 428 } 448 429 449 430 if (ustate->xsave.header.xfeatures & ~xcr0) 431 + return -EINVAL; 432 + 433 + /* 434 + * Disabled features must be in their initial state, otherwise XRSTOR 435 + * causes an exception. 436 + */ 437 + if (WARN_ON_ONCE(ustate->xsave.header.xfeatures & kstate->xfd)) 450 438 return -EINVAL; 451 439 452 440 /*
+16 -3
arch/x86/kernel/kvm.c
··· 89 89 struct swait_queue_head wq; 90 90 u32 token; 91 91 int cpu; 92 + bool dummy; 92 93 }; 93 94 94 95 static struct kvm_task_sleep_head { ··· 121 120 raw_spin_lock(&b->lock); 122 121 e = _find_apf_task(b, token); 123 122 if (e) { 124 - /* dummy entry exist -> wake up was delivered ahead of PF */ 125 - hlist_del(&e->link); 123 + struct kvm_task_sleep_node *dummy = NULL; 124 + 125 + /* 126 + * The entry can either be a 'dummy' entry (which is put on the 127 + * list when wake-up happens ahead of APF handling completion) 128 + * or a token from another task which should not be touched. 129 + */ 130 + if (e->dummy) { 131 + hlist_del(&e->link); 132 + dummy = e; 133 + } 134 + 126 135 raw_spin_unlock(&b->lock); 127 - kfree(e); 136 + kfree(dummy); 128 137 return false; 129 138 } 130 139 131 140 n->token = token; 132 141 n->cpu = smp_processor_id(); 142 + n->dummy = false; 133 143 init_swait_queue_head(&n->wq); 134 144 hlist_add_head(&n->link, &b->list); 135 145 raw_spin_unlock(&b->lock); ··· 243 231 } 244 232 dummy->token = token; 245 233 dummy->cpu = smp_processor_id(); 234 + dummy->dummy = true; 246 235 init_swait_queue_head(&dummy->wq); 247 236 hlist_add_head(&dummy->link, &b->list); 248 237 dummy = NULL;
+1 -1
arch/x86/kernel/x86_init.c
··· 1 1 /* 2 - * Copyright (C) 2009 Thomas Gleixner <tglx@linutronix.de> 2 + * Copyright (C) 2009 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 3 3 * 4 4 * For licencing details see kernel-base/COPYING 5 5 */
+9
arch/x86/kvm/x86.c
··· 5807 5807 static int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu, 5808 5808 struct kvm_xsave *guest_xsave) 5809 5809 { 5810 + union fpregs_state *xstate = (union fpregs_state *)guest_xsave->region; 5811 + 5810 5812 if (fpstate_is_confidential(&vcpu->arch.guest_fpu)) 5811 5813 return vcpu->kvm->arch.has_protected_state ? -EINVAL : 0; 5814 + 5815 + /* 5816 + * For backwards compatibility, do not expect disabled features to be in 5817 + * their initial state. XSTATE_BV[i] must still be cleared whenever 5818 + * XFD[i]=1, or XRSTOR would cause a #NM. 5819 + */ 5820 + xstate->xsave.header.xfeatures &= ~vcpu->arch.guest_fpu.fpstate->xfd; 5812 5821 5813 5822 return fpu_copy_uabi_to_guest_fpstate(&vcpu->arch.guest_fpu, 5814 5823 guest_xsave->region,
+1 -1
arch/x86/mm/pti.c
··· 15 15 * Signed-off-by: Michael Schwarz <michael.schwarz@iaik.tugraz.at> 16 16 * 17 17 * Major changes to the original code by: Dave Hansen <dave.hansen@intel.com> 18 - * Mostly rewritten by Thomas Gleixner <tglx@linutronix.de> and 18 + * Mostly rewritten by Thomas Gleixner <tglx@kernel.org> and 19 19 * Andy Lutomirsky <luto@amacapital.net> 20 20 */ 21 21 #include <linux/kernel.h>
+18 -5
block/blk-integrity.c
··· 140 140 bool blk_integrity_merge_rq(struct request_queue *q, struct request *req, 141 141 struct request *next) 142 142 { 143 + struct bio_integrity_payload *bip, *bip_next; 144 + 143 145 if (blk_integrity_rq(req) == 0 && blk_integrity_rq(next) == 0) 144 146 return true; 145 147 146 148 if (blk_integrity_rq(req) == 0 || blk_integrity_rq(next) == 0) 147 149 return false; 148 150 149 - if (bio_integrity(req->bio)->bip_flags != 150 - bio_integrity(next->bio)->bip_flags) 151 + bip = bio_integrity(req->bio); 152 + bip_next = bio_integrity(next->bio); 153 + if (bip->bip_flags != bip_next->bip_flags) 154 + return false; 155 + 156 + if (bip->bip_flags & BIP_CHECK_APPTAG && 157 + bip->app_tag != bip_next->app_tag) 151 158 return false; 152 159 153 160 if (req->nr_integrity_segments + next->nr_integrity_segments > ··· 170 163 bool blk_integrity_merge_bio(struct request_queue *q, struct request *req, 171 164 struct bio *bio) 172 165 { 166 + struct bio_integrity_payload *bip, *bip_bio = bio_integrity(bio); 173 167 int nr_integrity_segs; 174 168 175 - if (blk_integrity_rq(req) == 0 && bio_integrity(bio) == NULL) 169 + if (blk_integrity_rq(req) == 0 && bip_bio == NULL) 176 170 return true; 177 171 178 - if (blk_integrity_rq(req) == 0 || bio_integrity(bio) == NULL) 172 + if (blk_integrity_rq(req) == 0 || bip_bio == NULL) 179 173 return false; 180 174 181 - if (bio_integrity(req->bio)->bip_flags != bio_integrity(bio)->bip_flags) 175 + bip = bio_integrity(req->bio); 176 + if (bip->bip_flags != bip_bio->bip_flags) 177 + return false; 178 + 179 + if (bip->bip_flags & BIP_CHECK_APPTAG && 180 + bip->app_tag != bip_bio->app_tag) 182 181 return false; 183 182 184 183 nr_integrity_segs = blk_rq_count_integrity_sg(q, bio);
+1 -2
block/blk-mq.c
··· 4553 4553 * Make sure reading the old queue_hw_ctx from other 4554 4554 * context concurrently won't trigger uaf. 4555 4555 */ 4556 - synchronize_rcu_expedited(); 4557 - kfree(hctxs); 4556 + kfree_rcu_mightsleep(hctxs); 4558 4557 hctxs = new_hctxs; 4559 4558 } 4560 4559
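
kfree_rcu_mightsleep() is the "headless" kfree_rcu() variant: it needs no rcu_head embedded in the object, but it may block, so it is only valid in sleepable context, exactly like the synchronize_rcu_expedited() + kfree() pair it replaces here. A hedged sketch of the update-side pattern (the struct and names are illustrative, not from this file):

    #include <linux/rcupdate.h>
    #include <linux/slab.h>

    struct cfg {
            int value; /* no rcu_head needed for the headless variant */
    };

    static struct cfg __rcu *active_cfg;

    /* Publish a new config, then free the old one once every pre-existing
     * RCU reader has finished. May sleep: process context only. */
    static void replace_cfg(struct cfg *newc)
    {
            struct cfg *old;

            old = rcu_replace_pointer(active_cfg, newc, true);
            kfree_rcu_mightsleep(old);
    }
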
+9 -16
block/blk-rq-qos.h
··· 112 112 113 113 static inline void rq_qos_cleanup(struct request_queue *q, struct bio *bio) 114 114 { 115 - if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && 116 - q->rq_qos) 115 + if (test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags) && q->rq_qos) 117 116 __rq_qos_cleanup(q->rq_qos, bio); 118 117 } 119 118 120 119 static inline void rq_qos_done(struct request_queue *q, struct request *rq) 121 120 { 122 - if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && 123 - q->rq_qos && !blk_rq_is_passthrough(rq)) 121 + if (test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags) && 122 + q->rq_qos && !blk_rq_is_passthrough(rq)) 124 123 __rq_qos_done(q->rq_qos, rq); 125 124 } 126 125 127 126 static inline void rq_qos_issue(struct request_queue *q, struct request *rq) 128 127 { 129 - if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && 130 - q->rq_qos) 128 + if (test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags) && q->rq_qos) 131 129 __rq_qos_issue(q->rq_qos, rq); 132 130 } 133 131 134 132 static inline void rq_qos_requeue(struct request_queue *q, struct request *rq) 135 133 { 136 - if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && 137 - q->rq_qos) 134 + if (test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags) && q->rq_qos) 138 135 __rq_qos_requeue(q->rq_qos, rq); 139 136 } 140 137 ··· 159 162 160 163 static inline void rq_qos_throttle(struct request_queue *q, struct bio *bio) 161 164 { 162 - if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && 163 - q->rq_qos) { 165 + if (test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags) && q->rq_qos) { 164 166 bio_set_flag(bio, BIO_QOS_THROTTLED); 165 167 __rq_qos_throttle(q->rq_qos, bio); 166 168 } ··· 168 172 static inline void rq_qos_track(struct request_queue *q, struct request *rq, 169 173 struct bio *bio) 170 174 { 171 - if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && 172 - q->rq_qos) 175 + if (test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags) && q->rq_qos) 173 176 __rq_qos_track(q->rq_qos, rq, bio); 174 177 } 175 178 176 179 static inline void rq_qos_merge(struct request_queue *q, struct request *rq, 177 180 struct bio *bio) 178 181 { 179 - if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && 180 - q->rq_qos) { 182 + if (test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags) && q->rq_qos) { 181 183 bio_set_flag(bio, BIO_QOS_MERGED); 182 184 __rq_qos_merge(q->rq_qos, rq, bio); 183 185 } ··· 183 189 184 190 static inline void rq_qos_queue_depth_changed(struct request_queue *q) 185 191 { 186 - if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && 187 - q->rq_qos) 192 + if (test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags) && q->rq_qos) 188 193 __rq_qos_queue_depth_changed(q->rq_qos); 189 194 } 190 195
+11 -8
drivers/acpi/pci_irq.c
··· 188 188 * the IRQ value, which is hardwired to specific interrupt inputs on 189 189 * the interrupt controller. 190 190 */ 191 - pr_debug("%04x:%02x:%02x[%c] -> %s[%d]\n", 191 + pr_debug("%04x:%02x:%02x[%c] -> %s[%u]\n", 192 192 entry->id.segment, entry->id.bus, entry->id.device, 193 193 pin_name(entry->pin), prt->source, entry->index); 194 194 ··· 384 384 int acpi_pci_irq_enable(struct pci_dev *dev) 385 385 { 386 386 struct acpi_prt_entry *entry; 387 - int gsi; 387 + u32 gsi; 388 388 u8 pin; 389 389 int triggering = ACPI_LEVEL_SENSITIVE; 390 390 /* ··· 422 422 return 0; 423 423 } 424 424 425 + rc = -ENODEV; 426 + 425 427 if (entry) { 426 428 if (entry->link) 427 - gsi = acpi_pci_link_allocate_irq(entry->link, 429 + rc = acpi_pci_link_allocate_irq(entry->link, 428 430 entry->index, 429 431 &triggering, &polarity, 430 - &link); 431 - else 432 + &link, &gsi); 433 + else { 432 434 gsi = entry->index; 433 - } else 434 - gsi = -1; 435 + rc = 0; 436 + } 437 + } 435 438 436 - if (gsi < 0) { 439 + if (rc < 0) { 437 440 /* 438 441 * No IRQ known to the ACPI subsystem - maybe the BIOS / 439 442 * driver reported one, then use it. Exit in any case.
+25 -14
drivers/acpi/pci_link.c
··· 448 448 /* >IRQ15 */ 449 449 }; 450 450 451 - static int acpi_irq_pci_sharing_penalty(int irq) 451 + static int acpi_irq_pci_sharing_penalty(u32 irq) 452 452 { 453 453 struct acpi_pci_link *link; 454 454 int penalty = 0; ··· 474 474 return penalty; 475 475 } 476 476 477 - static int acpi_irq_get_penalty(int irq) 477 + static int acpi_irq_get_penalty(u32 irq) 478 478 { 479 479 int penalty = 0; 480 480 ··· 528 528 static int acpi_pci_link_allocate(struct acpi_pci_link *link) 529 529 { 530 530 acpi_handle handle = link->device->handle; 531 - int irq; 531 + u32 irq; 532 532 int i; 533 533 534 534 if (link->irq.initialized) { ··· 598 598 return 0; 599 599 } 600 600 601 - /* 602 - * acpi_pci_link_allocate_irq 603 - * success: return IRQ >= 0 604 - * failure: return -1 601 + /** 602 + * acpi_pci_link_allocate_irq(): Retrieve a link device GSI 603 + * 604 + * @handle: Handle for the link device 605 + * @index: GSI index 606 + * @triggering: pointer to store the GSI trigger 607 + * @polarity: pointer to store GSI polarity 608 + * @name: pointer to store link device name 609 + * @gsi: pointer to store GSI number 610 + * 611 + * Returns: 612 + * 0 on success with @triggering, @polarity, @name, @gsi initialized. 613 + * -ENODEV on failure 605 614 */ 606 615 int acpi_pci_link_allocate_irq(acpi_handle handle, int index, int *triggering, 607 - int *polarity, char **name) 616 + int *polarity, char **name, u32 *gsi) 608 617 { 609 618 struct acpi_device *device = acpi_fetch_acpi_dev(handle); 610 619 struct acpi_pci_link *link; 611 620 612 621 if (!device) { 613 622 acpi_handle_err(handle, "Invalid link device\n"); 614 - return -1; 623 + return -ENODEV; 615 624 } 616 625 617 626 link = acpi_driver_data(device); 618 627 if (!link) { 619 628 acpi_handle_err(handle, "Invalid link context\n"); 620 - return -1; 629 + return -ENODEV; 621 630 } 622 631 623 632 /* TBD: Support multiple index (IRQ) entries per Link Device */ 624 633 if (index) { 625 634 acpi_handle_err(handle, "Invalid index %d\n", index); 626 - return -1; 635 + return -ENODEV; 627 636 } 628 637 629 638 mutex_lock(&acpi_link_lock); 630 639 if (acpi_pci_link_allocate(link)) { 631 640 mutex_unlock(&acpi_link_lock); 632 - return -1; 641 + return -ENODEV; 633 642 } 634 643 635 644 if (!link->irq.active) { 636 645 mutex_unlock(&acpi_link_lock); 637 646 acpi_handle_err(handle, "Link active IRQ is 0!\n"); 638 - return -1; 647 + return -ENODEV; 639 648 } 640 649 link->refcnt++; 641 650 mutex_unlock(&acpi_link_lock); ··· 656 647 if (name) 657 648 *name = acpi_device_bid(link->device); 658 649 acpi_handle_debug(handle, "Link is referenced\n"); 659 - return link->irq.active; 650 + *gsi = link->irq.active; 651 + 652 + return 0; 660 653 } 661 654 662 655 /*
-3
drivers/android/binder/page_range.rs
··· 727 727 drop(mm); 728 728 drop(page); 729 729 730 - // SAFETY: We just unlocked the lru lock, but it should be locked when we return. 731 - unsafe { bindings::spin_lock(&raw mut (*lru).lock) }; 732 - 733 730 LRU_REMOVED_ENTRY 734 731 }
+33 -12
drivers/block/loop.c
··· 1225 1225 } 1226 1226 1227 1227 static int 1228 - loop_set_status(struct loop_device *lo, const struct loop_info64 *info) 1228 + loop_set_status(struct loop_device *lo, blk_mode_t mode, 1229 + struct block_device *bdev, const struct loop_info64 *info) 1229 1230 { 1230 1231 int err; 1231 1232 bool partscan = false; 1232 1233 bool size_changed = false; 1233 1234 unsigned int memflags; 1234 1235 1236 + /* 1237 + * If we don't hold exclusive handle for the device, upgrade to it 1238 + * here to avoid changing device under exclusive owner. 1239 + */ 1240 + if (!(mode & BLK_OPEN_EXCL)) { 1241 + err = bd_prepare_to_claim(bdev, loop_set_status, NULL); 1242 + if (err) 1243 + goto out_reread_partitions; 1244 + } 1245 + 1235 1246 err = mutex_lock_killable(&lo->lo_mutex); 1236 1247 if (err) 1237 - return err; 1248 + goto out_abort_claiming; 1249 + 1238 1250 if (lo->lo_state != Lo_bound) { 1239 1251 err = -ENXIO; 1240 1252 goto out_unlock; ··· 1285 1273 } 1286 1274 out_unlock: 1287 1275 mutex_unlock(&lo->lo_mutex); 1276 + out_abort_claiming: 1277 + if (!(mode & BLK_OPEN_EXCL)) 1278 + bd_abort_claiming(bdev, loop_set_status); 1279 + out_reread_partitions: 1288 1280 if (partscan) 1289 1281 loop_reread_partitions(lo); 1290 1282 ··· 1368 1352 } 1369 1353 1370 1354 static int 1371 - loop_set_status_old(struct loop_device *lo, const struct loop_info __user *arg) 1355 + loop_set_status_old(struct loop_device *lo, blk_mode_t mode, 1356 + struct block_device *bdev, 1357 + const struct loop_info __user *arg) 1372 1358 { 1373 1359 struct loop_info info; 1374 1360 struct loop_info64 info64; ··· 1378 1360 if (copy_from_user(&info, arg, sizeof (struct loop_info))) 1379 1361 return -EFAULT; 1380 1362 loop_info64_from_old(&info, &info64); 1381 - return loop_set_status(lo, &info64); 1363 + return loop_set_status(lo, mode, bdev, &info64); 1382 1364 } 1383 1365 1384 1366 static int 1385 - loop_set_status64(struct loop_device *lo, const struct loop_info64 __user *arg) 1367 + loop_set_status64(struct loop_device *lo, blk_mode_t mode, 1368 + struct block_device *bdev, 1369 + const struct loop_info64 __user *arg) 1386 1370 { 1387 1371 struct loop_info64 info64; 1388 1372 1389 1373 if (copy_from_user(&info64, arg, sizeof (struct loop_info64))) 1390 1374 return -EFAULT; 1391 - return loop_set_status(lo, &info64); 1375 + return loop_set_status(lo, mode, bdev, &info64); 1392 1376 } 1393 1377 1394 1378 static int ··· 1569 1549 case LOOP_SET_STATUS: 1570 1550 err = -EPERM; 1571 1551 if ((mode & BLK_OPEN_WRITE) || capable(CAP_SYS_ADMIN)) 1572 - err = loop_set_status_old(lo, argp); 1552 + err = loop_set_status_old(lo, mode, bdev, argp); 1573 1553 break; 1574 1554 case LOOP_GET_STATUS: 1575 1555 return loop_get_status_old(lo, argp); 1576 1556 case LOOP_SET_STATUS64: 1577 1557 err = -EPERM; 1578 1558 if ((mode & BLK_OPEN_WRITE) || capable(CAP_SYS_ADMIN)) 1579 - err = loop_set_status64(lo, argp); 1559 + err = loop_set_status64(lo, mode, bdev, argp); 1580 1560 break; 1581 1561 case LOOP_GET_STATUS64: 1582 1562 return loop_get_status64(lo, argp); ··· 1670 1650 } 1671 1651 1672 1652 static int 1673 - loop_set_status_compat(struct loop_device *lo, 1674 - const struct compat_loop_info __user *arg) 1653 + loop_set_status_compat(struct loop_device *lo, blk_mode_t mode, 1654 + struct block_device *bdev, 1655 + const struct compat_loop_info __user *arg) 1675 1656 { 1676 1657 struct loop_info64 info64; 1677 1658 int ret; ··· 1680 1659 ret = loop_info64_from_compat(arg, &info64); 1681 1660 if (ret < 0) 1682 1661 return ret; 1683 - return 
loop_set_status(lo, &info64); 1662 + return loop_set_status(lo, mode, bdev, &info64); 1684 1663 } 1685 1664 1686 1665 static int ··· 1706 1685 1707 1686 switch(cmd) { 1708 1687 case LOOP_SET_STATUS: 1709 - err = loop_set_status_compat(lo, 1688 + err = loop_set_status_compat(lo, mode, bdev, 1710 1689 (const struct compat_loop_info __user *)arg); 1711 1690 break; 1712 1691 case LOOP_GET_STATUS:
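The claim/abort pair added above is the standard bd_prepare_to_claim()/bd_abort_claiming() bracket, with the function address serving as the holder cookie; condensed to the bare pattern:

/* Claim the bdev exclusively for the duration of the update unless
 * the ioctl caller already opened it with BLK_OPEN_EXCL, so the
 * status change cannot race with an exclusive owner. */
if (!(mode & BLK_OPEN_EXCL)) {
        err = bd_prepare_to_claim(bdev, loop_set_status, NULL);
        if (err)
                return err;
}

/* ... update the loop device state under lo->lo_mutex ... */

if (!(mode & BLK_OPEN_EXCL))
        bd_abort_claiming(bdev, loop_set_status);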
+22 -15
drivers/block/ublk_drv.c
··· 255 255 u16 q_id, u16 tag, struct ublk_io *io, size_t offset); 256 256 static inline unsigned int ublk_req_build_flags(struct request *req); 257 257 258 - static void ublk_partition_scan_work(struct work_struct *work) 259 - { 260 - struct ublk_device *ub = 261 - container_of(work, struct ublk_device, partition_scan_work); 262 - 263 - if (WARN_ON_ONCE(!test_and_clear_bit(GD_SUPPRESS_PART_SCAN, 264 - &ub->ub_disk->state))) 265 - return; 266 - 267 - mutex_lock(&ub->ub_disk->open_mutex); 268 - bdev_disk_changed(ub->ub_disk, false); 269 - mutex_unlock(&ub->ub_disk->open_mutex); 270 - } 271 - 272 258 static inline struct ublksrv_io_desc * 273 259 ublk_get_iod(const struct ublk_queue *ubq, unsigned tag) 274 260 { ··· 1583 1597 put_device(disk_to_dev(disk)); 1584 1598 } 1585 1599 1600 + static void ublk_partition_scan_work(struct work_struct *work) 1601 + { 1602 + struct ublk_device *ub = 1603 + container_of(work, struct ublk_device, partition_scan_work); 1604 + /* Hold disk reference to prevent UAF during concurrent teardown */ 1605 + struct gendisk *disk = ublk_get_disk(ub); 1606 + 1607 + if (!disk) 1608 + return; 1609 + 1610 + if (WARN_ON_ONCE(!test_and_clear_bit(GD_SUPPRESS_PART_SCAN, 1611 + &disk->state))) 1612 + goto out; 1613 + 1614 + mutex_lock(&disk->open_mutex); 1615 + bdev_disk_changed(disk, false); 1616 + mutex_unlock(&disk->open_mutex); 1617 + out: 1618 + ublk_put_disk(disk); 1619 + } 1620 + 1586 1621 /* 1587 1622 * Use this function to ensure that ->canceling is consistently set for 1588 1623 * the device and all queues. Do not set these flags directly. ··· 2048 2041 mutex_lock(&ub->mutex); 2049 2042 ublk_stop_dev_unlocked(ub); 2050 2043 mutex_unlock(&ub->mutex); 2051 - flush_work(&ub->partition_scan_work); 2044 + cancel_work_sync(&ub->partition_scan_work); 2052 2045 ublk_cancel_dev(ub); 2053 2046 } 2054 2047
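Moving ublk_partition_scan_work() below the ublk_get_disk()/ublk_put_disk() helpers lets the deferred work pin the gendisk for its whole runtime, closing the use-after-free window against concurrent teardown; the pattern in isolation (assuming ublk_get_disk() returns NULL once the disk reference is gone):

struct gendisk *disk = ublk_get_disk(ub);       /* take a reference */

if (!disk)
        return;         /* teardown already dropped the disk */

mutex_lock(&disk->open_mutex);
bdev_disk_changed(disk, false);                 /* rescan partitions */
mutex_unlock(&disk->open_mutex);

ublk_put_disk(disk);                            /* drop the reference */

The flush_work() to cancel_work_sync() change in the stop path is the other half: a scan that has not started yet is simply discarded instead of being run against a device that is going away.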
+14 -6
drivers/counter/104-quad-8.c
··· 1192 1192 { 1193 1193 struct counter_device *counter = private; 1194 1194 struct quad8 *const priv = counter_priv(counter); 1195 + struct device *dev = counter->parent; 1195 1196 unsigned int status; 1196 1197 unsigned long irq_status; 1197 1198 unsigned long channel; ··· 1201 1200 int ret; 1202 1201 1203 1202 ret = regmap_read(priv->map, QUAD8_INTERRUPT_STATUS, &status); 1204 - if (ret) 1205 - return ret; 1203 + if (ret) { 1204 + dev_WARN_ONCE(dev, true, 1205 + "Attempt to read Interrupt Status Register failed: %d\n", ret); 1206 + return IRQ_NONE; 1207 + } 1206 1208 if (!status) 1207 1209 return IRQ_NONE; 1208 1210 ··· 1227 1223 break; 1228 1224 default: 1229 1225 /* should never reach this path */ 1230 - WARN_ONCE(true, "invalid interrupt trigger function %u configured for channel %lu\n", 1231 - flg_pins, channel); 1226 + dev_WARN_ONCE(dev, true, 1227 + "invalid interrupt trigger function %u configured for channel %lu\n", 1228 + flg_pins, channel); 1232 1229 continue; 1233 1230 } 1234 1231 ··· 1237 1232 } 1238 1233 1239 1234 ret = regmap_write(priv->map, QUAD8_CHANNEL_OPERATION, CLEAR_PENDING_INTERRUPTS); 1240 - if (ret) 1241 - return ret; 1235 + if (ret) { 1236 + dev_WARN_ONCE(dev, true, 1237 + "Attempt to clear pending interrupts by writing to Channel Operation Register failed: %d\n", ret); 1238 + return IRQ_HANDLED; 1239 + } 1242 1240 1243 1241 return IRQ_HANDLED; 1244 1242 }
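The rule being enforced above: an interrupt handler must return an irqreturn_t, never a negative errno, so register I/O failures are reported once via dev_WARN_ONCE() and mapped onto IRQ_NONE or IRQ_HANDLED. The shape, reduced to a sketch (map, dev and STATUS_REG are stand-ins):

static irqreturn_t example_isr(int irq, void *private)
{
        unsigned int status;
        int ret;

        ret = regmap_read(map, STATUS_REG, &status);    /* stand-in */
        if (ret) {
                dev_WARN_ONCE(dev, true, "status read failed: %d\n", ret);
                return IRQ_NONE;        /* never a negative errno */
        }
        if (!status)
                return IRQ_NONE;        /* interrupt was not ours */

        /* ... service the active channels ... */
        return IRQ_HANDLED;
}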
+1 -2
drivers/counter/interrupt-cnt.c
··· 229 229 230 230 irq_set_status_flags(priv->irq, IRQ_NOAUTOEN); 231 231 ret = devm_request_irq(dev, priv->irq, interrupt_cnt_isr, 232 - IRQF_TRIGGER_RISING | IRQF_NO_THREAD, 233 - dev_name(dev), counter); 232 + IRQF_TRIGGER_RISING, dev_name(dev), counter); 234 233 if (ret) 235 234 return ret; 236 235
-2
drivers/crypto/intel/qat/qat_common/adf_aer.c
··· 41 41 adf_error_notifier(accel_dev); 42 42 adf_pf2vf_notify_fatal_error(accel_dev); 43 43 adf_dev_restarting_notify(accel_dev); 44 - adf_pf2vf_notify_restarting(accel_dev); 45 - adf_pf2vf_wait_for_restarting_complete(accel_dev); 46 44 pci_clear_master(pdev); 47 45 adf_dev_down(accel_dev); 48 46
+3 -8
drivers/gpio/gpio-it87.c
··· 12 12 13 13 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 14 14 15 + #include <linux/cleanup.h> 15 16 #include <linux/init.h> 16 17 #include <linux/kernel.h> 17 18 #include <linux/module.h> ··· 242 241 mask = 1 << (gpio_num % 8); 243 242 group = (gpio_num / 8); 244 243 245 - spin_lock(&it87_gpio->lock); 244 + guard(spinlock)(&it87_gpio->lock); 246 245 247 246 rc = superio_enter(); 248 247 if (rc) 249 - goto exit; 248 + return rc; 250 249 251 250 /* set the output enable bit */ 252 251 superio_set_mask(mask, group + it87_gpio->output_base); 253 252 254 253 rc = it87_gpio_set(chip, gpio_num, val); 255 - if (rc) 256 - goto exit; 257 - 258 254 superio_exit(); 259 - 260 - exit: 261 - spin_unlock(&it87_gpio->lock); 262 255 return rc; 263 256 } 264 257
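guard(spinlock)() from <linux/cleanup.h> releases the lock on every exit from the scope, which is what lets the exit label and the explicit spin_unlock() disappear; note the rewrite also makes superio_exit() unconditional, where the old error path jumped past it. The idiom, reduced to its essentials:

guard(spinlock)(&it87_gpio->lock);      /* unlocked on any return */

rc = superio_enter();
if (rc)
        return rc;                      /* lock dropped automatically */

superio_set_mask(mask, group + it87_gpio->output_base);
rc = it87_gpio_set(chip, gpio_num, val);
superio_exit();
return rc;                              /* and here */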
+11 -1
drivers/gpio/gpio-mpsse.c
··· 548 548 ida_free(&gpio_mpsse_ida, priv->id); 549 549 } 550 550 551 + static void gpio_mpsse_usb_put_dev(void *data) 552 + { 553 + struct mpsse_priv *priv = data; 554 + 555 + usb_put_dev(priv->udev); 556 + } 557 + 551 558 static int mpsse_init_valid_mask(struct gpio_chip *chip, 552 559 unsigned long *valid_mask, 553 560 unsigned int ngpios) ··· 599 592 INIT_LIST_HEAD(&priv->workers); 600 593 601 594 priv->udev = usb_get_dev(interface_to_usbdev(interface)); 595 + err = devm_add_action_or_reset(dev, gpio_mpsse_usb_put_dev, priv); 596 + if (err) 597 + return err; 598 + 602 599 priv->intf = interface; 603 600 priv->intf_id = interface->cur_altsetting->desc.bInterfaceNumber; 604 601 ··· 724 713 725 714 priv->intf = NULL; 726 715 usb_set_intfdata(intf, NULL); 727 - usb_put_dev(priv->udev); 728 716 } 729 717 730 718 static struct usb_driver gpio_mpsse_driver = {
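devm_add_action_or_reset() ties the usb_get_dev() reference to the device's lifetime: the put runs automatically on probe failure and on disconnect, and immediately if registering the action itself fails, so the explicit usb_put_dev() in the disconnect path can go. The idiom in isolation:

static void gpio_mpsse_usb_put_dev(void *data)
{
        struct mpsse_priv *priv = data;

        usb_put_dev(priv->udev);        /* balance usb_get_dev() */
}

/* in probe: */
priv->udev = usb_get_dev(interface_to_usbdev(interface));
err = devm_add_action_or_reset(dev, gpio_mpsse_usb_put_dev, priv);
if (err)
        return err;     /* the action has already run */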
+24 -1
drivers/gpio/gpio-pca953x.c
··· 943 943 DECLARE_BITMAP(old_stat, MAX_LINE); 944 944 DECLARE_BITMAP(cur_stat, MAX_LINE); 945 945 DECLARE_BITMAP(new_stat, MAX_LINE); 946 + DECLARE_BITMAP(int_stat, MAX_LINE); 946 947 DECLARE_BITMAP(trigger, MAX_LINE); 947 948 DECLARE_BITMAP(edges, MAX_LINE); 948 949 int ret; 949 950 951 + if (chip->driver_data & PCA_PCAL) { 952 + /* Read INT_STAT before it is cleared by the input-port read. */ 953 + ret = pca953x_read_regs(chip, PCAL953X_INT_STAT, int_stat); 954 + if (ret) 955 + return false; 956 + } 957 + 950 958 ret = pca953x_read_regs(chip, chip->regs->input, cur_stat); 951 959 if (ret) 952 960 return false; 961 + 962 + if (chip->driver_data & PCA_PCAL) { 963 + /* Detect short pulses via INT_STAT. */ 964 + bitmap_and(trigger, int_stat, chip->irq_mask, gc->ngpio); 965 + 966 + /* Apply filter for rising/falling edge selection. */ 967 + bitmap_replace(new_stat, chip->irq_trig_fall, chip->irq_trig_raise, 968 + cur_stat, gc->ngpio); 969 + 970 + bitmap_and(int_stat, new_stat, trigger, gc->ngpio); 971 + } else { 972 + bitmap_zero(int_stat, gc->ngpio); 973 + } 953 974 954 975 /* Remove output pins from the equation */ 955 976 pca953x_read_regs(chip, chip->regs->direction, reg_direction); ··· 985 964 986 965 if (bitmap_empty(chip->irq_trig_level_high, gc->ngpio) && 987 966 bitmap_empty(chip->irq_trig_level_low, gc->ngpio)) { 988 - if (bitmap_empty(trigger, gc->ngpio)) 967 + if (bitmap_empty(trigger, gc->ngpio) && 968 + bitmap_empty(int_stat, gc->ngpio)) 989 969 return false; 990 970 } 991 971 ··· 994 972 bitmap_and(old_stat, chip->irq_trig_raise, new_stat, gc->ngpio); 995 973 bitmap_or(edges, old_stat, cur_stat, gc->ngpio); 996 974 bitmap_and(pending, edges, trigger, gc->ngpio); 975 + bitmap_or(pending, pending, int_stat, gc->ngpio); 997 976 998 977 bitmap_and(cur_stat, new_stat, chip->irq_trig_level_high, gc->ngpio); 999 978 bitmap_and(cur_stat, cur_stat, chip->irq_mask, gc->ngpio);
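Ordering is the crux of the hunk above: on PCAL parts the input-port read clears the latched interrupt status, so INT_STAT must be captured first; a pulse that has already reverted by the time the input snapshot is taken then survives only in int_stat. The filtering that follows, annotated (bitmap_replace(dst, old, new, mask, n) computes dst = (old & ~mask) | (new & mask)):

/* keep only latched lines whose interrupt is unmasked */
bitmap_and(trigger, int_stat, chip->irq_mask, gc->ngpio);

/* per line: the falling-edge mask where the pin now reads low, the
 * rising-edge mask where it reads high */
bitmap_replace(new_stat, chip->irq_trig_fall, chip->irq_trig_raise,
               cur_stat, gc->ngpio);

/* report a latched line only if that edge type was requested */
bitmap_and(int_stat, new_stat, trigger, gc->ngpio);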
+1
drivers/gpio/gpio-rockchip.c
··· 593 593 gc->ngpio = bank->nr_pins; 594 594 gc->label = bank->name; 595 595 gc->parent = bank->dev; 596 + gc->can_sleep = true; 596 597 597 598 ret = gpiochip_add_data(gc, bank); 598 599 if (ret) {
+180 -71
drivers/gpio/gpiolib-shared.c
··· 38 38 int dev_id; 39 39 /* Protects the auxiliary device struct and the lookup table. */ 40 40 struct mutex lock; 41 + struct lock_class_key lock_key; 41 42 struct auxiliary_device adev; 42 43 struct gpiod_lookup_table *lookup; 44 + bool is_reset_gpio; 43 45 }; 44 46 45 47 /* Represents a single GPIO pin. */ ··· 78 76 return NULL; 79 77 } 80 78 79 + static struct gpio_shared_ref *gpio_shared_make_ref(struct fwnode_handle *fwnode, 80 + const char *con_id, 81 + enum gpiod_flags flags) 82 + { 83 + char *con_id_cpy __free(kfree) = NULL; 84 + 85 + struct gpio_shared_ref *ref __free(kfree) = kzalloc(sizeof(*ref), GFP_KERNEL); 86 + if (!ref) 87 + return NULL; 88 + 89 + if (con_id) { 90 + con_id_cpy = kstrdup(con_id, GFP_KERNEL); 91 + if (!con_id_cpy) 92 + return NULL; 93 + } 94 + 95 + ref->dev_id = ida_alloc(&gpio_shared_ida, GFP_KERNEL); 96 + if (ref->dev_id < 0) 97 + return NULL; 98 + 99 + ref->flags = flags; 100 + ref->con_id = no_free_ptr(con_id_cpy); 101 + ref->fwnode = fwnode; 102 + lockdep_register_key(&ref->lock_key); 103 + mutex_init_with_key(&ref->lock, &ref->lock_key); 104 + 105 + return no_free_ptr(ref); 106 + } 107 + 108 + static int gpio_shared_setup_reset_proxy(struct gpio_shared_entry *entry, 109 + enum gpiod_flags flags) 110 + { 111 + struct gpio_shared_ref *ref; 112 + 113 + list_for_each_entry(ref, &entry->refs, list) { 114 + if (ref->is_reset_gpio) 115 + /* Already set-up. */ 116 + return 0; 117 + } 118 + 119 + ref = gpio_shared_make_ref(NULL, "reset", flags); 120 + if (!ref) 121 + return -ENOMEM; 122 + 123 + ref->is_reset_gpio = true; 124 + 125 + list_add_tail(&ref->list, &entry->refs); 126 + 127 + pr_debug("Created a secondary shared GPIO reference for potential reset-gpio device for GPIO %u at %s\n", 128 + entry->offset, fwnode_get_name(entry->fwnode)); 129 + 130 + return 0; 131 + } 132 + 81 133 /* Handle all special nodes that we should ignore. */ 82 134 static bool gpio_shared_of_node_ignore(struct device_node *node) 83 135 { ··· 162 106 size_t con_id_len, suffix_len; 163 107 struct fwnode_handle *fwnode; 164 108 struct of_phandle_args args; 109 + struct gpio_shared_ref *ref; 165 110 struct property *prop; 166 111 unsigned int offset; 167 112 const char *suffix; ··· 195 138 196 139 for (i = 0; i < count; i++) { 197 140 struct device_node *np __free(device_node) = NULL; 141 + char *con_id __free(kfree) = NULL; 198 142 199 143 ret = of_parse_phandle_with_args(curr, prop->name, 200 144 "#gpio-cells", i, ··· 240 182 list_add_tail(&entry->list, &gpio_shared_list); 241 183 } 242 184 243 - struct gpio_shared_ref *ref __free(kfree) = 244 - kzalloc(sizeof(*ref), GFP_KERNEL); 245 - if (!ref) 246 - return -ENOMEM; 247 - 248 - ref->fwnode = fwnode_handle_get(of_fwnode_handle(curr)); 249 - ref->flags = args.args[1]; 250 - mutex_init(&ref->lock); 251 - 252 185 if (strends(prop->name, "gpios")) 253 186 suffix = "-gpios"; 254 187 else if (strends(prop->name, "gpio")) ··· 251 202 252 203 /* We only set con_id if there's actually one. 
*/ 253 204 if (strcmp(prop->name, "gpios") && strcmp(prop->name, "gpio")) { 254 - ref->con_id = kstrdup(prop->name, GFP_KERNEL); 255 - if (!ref->con_id) 205 + con_id = kstrdup(prop->name, GFP_KERNEL); 206 + if (!con_id) 256 207 return -ENOMEM; 257 208 258 - con_id_len = strlen(ref->con_id); 209 + con_id_len = strlen(con_id); 259 210 suffix_len = strlen(suffix); 260 211 261 - ref->con_id[con_id_len - suffix_len] = '\0'; 212 + con_id[con_id_len - suffix_len] = '\0'; 262 213 } 263 214 264 - ref->dev_id = ida_alloc(&gpio_shared_ida, GFP_KERNEL); 265 - if (ref->dev_id < 0) { 266 - kfree(ref->con_id); 215 + ref = gpio_shared_make_ref(fwnode_handle_get(of_fwnode_handle(curr)), 216 + con_id, args.args[1]); 217 + if (!ref) 267 218 return -ENOMEM; 268 - } 269 219 270 220 if (!list_empty(&entry->refs)) 271 221 pr_debug("GPIO %u at %s is shared by multiple firmware nodes\n", 272 222 entry->offset, fwnode_get_name(entry->fwnode)); 273 223 274 - list_add_tail(&no_free_ptr(ref)->list, &entry->refs); 224 + list_add_tail(&ref->list, &entry->refs); 225 + 226 + if (strcmp(prop->name, "reset-gpios") == 0) { 227 + ret = gpio_shared_setup_reset_proxy(entry, args.args[1]); 228 + if (ret) 229 + return ret; 230 + } 275 231 } 276 232 } 277 233 ··· 360 306 struct fwnode_handle *reset_fwnode = dev_fwnode(consumer); 361 307 struct fwnode_reference_args ref_args, aux_args; 362 308 struct device *parent = consumer->parent; 309 + struct gpio_shared_ref *real_ref; 363 310 bool match; 364 311 int ret; 365 312 313 + lockdep_assert_held(&ref->lock); 314 + 366 315 /* The reset-gpio device must have a parent AND a firmware node. */ 367 316 if (!parent || !reset_fwnode) 368 - return false; 369 - 370 - /* 371 - * FIXME: use device_is_compatible() once the reset-gpio drivers gains 372 - * a compatible string which it currently does not have. 373 - */ 374 - if (!strstarts(dev_name(consumer), "reset.gpio.")) 375 317 return false; 376 318 377 319 /* ··· 378 328 return false; 379 329 380 330 /* 381 - * The device associated with the shared reference's firmware node is 382 - * the consumer of the reset control exposed by the reset-gpio device. 383 - * It must have a "reset-gpios" property that's referencing the entry's 384 - * firmware node. 385 - * 386 - * The reference args must agree between the real consumer and the 387 - * auxiliary reset-gpio device. 331 + * Now we need to find the actual pin we want to assign to this 332 + * reset-gpio device. To that end: iterate over the list of references 333 + * of this entry and see if there's one, whose reset-gpios property's 334 + * arguments match the ones from this consumer's node. 388 335 */ 389 - ret = fwnode_property_get_reference_args(ref->fwnode, "reset-gpios", 390 - NULL, 2, 0, &ref_args); 391 - if (ret) 392 - return false; 336 + list_for_each_entry(real_ref, &entry->refs, list) { 337 + if (real_ref == ref) 338 + continue; 393 339 394 - ret = fwnode_property_get_reference_args(reset_fwnode, "reset-gpios", 395 - NULL, 2, 0, &aux_args); 396 - if (ret) { 340 + guard(mutex)(&real_ref->lock); 341 + 342 + if (!real_ref->fwnode) 343 + continue; 344 + 345 + /* 346 + * The device associated with the shared reference's firmware 347 + * node is the consumer of the reset control exposed by the 348 + * reset-gpio device. It must have a "reset-gpios" property 349 + * that's referencing the entry's firmware node. 350 + * 351 + * The reference args must agree between the real consumer and 352 + * the auxiliary reset-gpio device. 
353 + */ 354 + ret = fwnode_property_get_reference_args(real_ref->fwnode, 355 + "reset-gpios", 356 + NULL, 2, 0, &ref_args); 357 + if (ret) 358 + continue; 359 + 360 + ret = fwnode_property_get_reference_args(reset_fwnode, "reset-gpios", 361 + NULL, 2, 0, &aux_args); 362 + if (ret) { 363 + fwnode_handle_put(ref_args.fwnode); 364 + continue; 365 + } 366 + 367 + match = ((ref_args.fwnode == entry->fwnode) && 368 + (aux_args.fwnode == entry->fwnode) && 369 + (ref_args.args[0] == aux_args.args[0])); 370 + 397 371 fwnode_handle_put(ref_args.fwnode); 398 - return false; 372 + fwnode_handle_put(aux_args.fwnode); 373 + 374 + if (!match) 375 + continue; 376 + 377 + /* 378 + * Reuse the fwnode of the real device, next time we'll use it 379 + * in the normal path. 380 + */ 381 + ref->fwnode = fwnode_handle_get(reset_fwnode); 382 + return true; 399 383 } 400 384 401 - match = ((ref_args.fwnode == entry->fwnode) && 402 - (aux_args.fwnode == entry->fwnode) && 403 - (ref_args.args[0] == aux_args.args[0])); 404 - 405 - fwnode_handle_put(ref_args.fwnode); 406 - fwnode_handle_put(aux_args.fwnode); 407 - return match; 385 + return false; 408 386 } 409 387 #else 410 388 static bool gpio_shared_dev_is_reset_gpio(struct device *consumer, ··· 443 365 } 444 366 #endif /* CONFIG_RESET_GPIO */ 445 367 446 - int gpio_shared_add_proxy_lookup(struct device *consumer, unsigned long lflags) 368 + int gpio_shared_add_proxy_lookup(struct device *consumer, const char *con_id, 369 + unsigned long lflags) 447 370 { 448 371 const char *dev_id = dev_name(consumer); 372 + struct gpiod_lookup_table *lookup; 449 373 struct gpio_shared_entry *entry; 450 374 struct gpio_shared_ref *ref; 451 375 452 - struct gpiod_lookup_table *lookup __free(kfree) = 453 - kzalloc(struct_size(lookup, table, 2), GFP_KERNEL); 454 - if (!lookup) 455 - return -ENOMEM; 456 - 457 376 list_for_each_entry(entry, &gpio_shared_list, list) { 458 377 list_for_each_entry(ref, &entry->refs, list) { 459 - if (!device_match_fwnode(consumer, ref->fwnode) && 460 - !gpio_shared_dev_is_reset_gpio(consumer, entry, ref)) 461 - continue; 462 - 463 378 guard(mutex)(&ref->lock); 379 + 380 + /* 381 + * FIXME: use device_is_compatible() once the reset-gpio 382 + * drivers gains a compatible string which it currently 383 + * does not have. 384 + */ 385 + if (!ref->fwnode && strstarts(dev_name(consumer), "reset.gpio.")) { 386 + if (!gpio_shared_dev_is_reset_gpio(consumer, entry, ref)) 387 + continue; 388 + } else if (!device_match_fwnode(consumer, ref->fwnode)) { 389 + continue; 390 + } 391 + 392 + if ((!con_id && ref->con_id) || (con_id && !ref->con_id) || 393 + (con_id && ref->con_id && strcmp(con_id, ref->con_id) != 0)) 394 + continue; 464 395 465 396 /* We've already done that on a previous request. 
*/ 466 397 if (ref->lookup) ··· 482 395 if (!key) 483 396 return -ENOMEM; 484 397 398 + lookup = kzalloc(struct_size(lookup, table, 2), GFP_KERNEL); 399 + if (!lookup) 400 + return -ENOMEM; 401 + 485 402 pr_debug("Adding machine lookup entry for a shared GPIO for consumer %s, with key '%s' and con_id '%s'\n", 486 403 dev_id, key, ref->con_id ?: "none"); 487 404 ··· 493 402 lookup->table[0] = GPIO_LOOKUP(no_free_ptr(key), 0, 494 403 ref->con_id, lflags); 495 404 496 - ref->lookup = no_free_ptr(lookup); 405 + ref->lookup = lookup; 497 406 gpiod_add_lookup_table(ref->lookup); 498 407 499 408 return 0; ··· 557 466 entry->offset, gpio_device_get_label(gdev)); 558 467 559 468 list_for_each_entry(ref, &entry->refs, list) { 560 - pr_debug("Setting up a shared GPIO entry for %s\n", 561 - fwnode_get_name(ref->fwnode)); 469 + pr_debug("Setting up a shared GPIO entry for %s (con_id: '%s')\n", 470 + fwnode_get_name(ref->fwnode) ?: "(no fwnode)", 471 + ref->con_id ?: "(none)"); 562 472 563 473 ret = gpio_shared_make_adev(gdev, entry, ref); 564 474 if (ret) ··· 579 487 if (!device_match_fwnode(&gdev->dev, entry->fwnode)) 580 488 continue; 581 489 582 - /* 583 - * For some reason if we call synchronize_srcu() in GPIO core, 584 - * descent here and take this mutex and then recursively call 585 - * synchronize_srcu() again from gpiochip_remove() (which is 586 - * totally fine) called after gpio_shared_remove_adev(), 587 - * lockdep prints a false positive deadlock splat. Disable 588 - * lockdep here. 589 - */ 590 - lockdep_off(); 591 490 list_for_each_entry(ref, &entry->refs, list) { 592 491 guard(mutex)(&ref->lock); 593 492 ··· 591 508 592 509 gpio_shared_remove_adev(&ref->adev); 593 510 } 594 - lockdep_on(); 595 511 } 596 512 } 597 513 ··· 686 604 { 687 605 list_del(&ref->list); 688 606 mutex_destroy(&ref->lock); 607 + lockdep_unregister_key(&ref->lock_key); 689 608 kfree(ref->con_id); 690 609 ida_free(&gpio_shared_ida, ref->dev_id); 691 610 fwnode_handle_put(ref->fwnode); ··· 718 635 } 719 636 } 720 637 638 + static bool gpio_shared_entry_is_really_shared(struct gpio_shared_entry *entry) 639 + { 640 + size_t num_nodes = list_count_nodes(&entry->refs); 641 + struct gpio_shared_ref *ref; 642 + 643 + if (num_nodes <= 1) 644 + return false; 645 + 646 + if (num_nodes > 2) 647 + return true; 648 + 649 + /* Exactly two references: */ 650 + list_for_each_entry(ref, &entry->refs, list) { 651 + /* 652 + * Corner-case: the second reference comes from the potential 653 + * reset-gpio instance. However, this pin is not really shared 654 + * as it would have three references in this case. Avoid 655 + * creating unnecessary proxies. 656 + */ 657 + if (ref->is_reset_gpio) 658 + return false; 659 + } 660 + 661 + return true; 662 + } 663 + 721 664 static void gpio_shared_free_exclusive(void) 722 665 { 723 666 struct gpio_shared_entry *entry, *epos; 724 667 725 668 list_for_each_entry_safe(entry, epos, &gpio_shared_list, list) { 726 - if (list_count_nodes(&entry->refs) > 1) 669 + if (gpio_shared_entry_is_really_shared(entry)) 727 670 continue; 728 671 729 672 gpio_shared_drop_ref(list_first_entry(&entry->refs,
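The per-reference lock_class_key is what allows the earlier lockdep_off()/lockdep_on() workaround to be deleted: each ref's mutex now carries its own registered class, so the synchronize_srcu() chain described in the removed comment no longer produces a false-positive splat. The allocation and teardown halves of the idiom, side by side:

/* init (see gpio_shared_make_ref()): the key must outlive the mutex */
lockdep_register_key(&ref->lock_key);
mutex_init_with_key(&ref->lock, &ref->lock_key);

/* teardown (from the drop-ref hunk above), in the mirror order */
mutex_destroy(&ref->lock);
lockdep_unregister_key(&ref->lock_key);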
+3 -1
drivers/gpio/gpiolib-shared.h
··· 16 16 17 17 int gpio_device_setup_shared(struct gpio_device *gdev); 18 18 void gpio_device_teardown_shared(struct gpio_device *gdev); 19 - int gpio_shared_add_proxy_lookup(struct device *consumer, unsigned long lflags); 19 + int gpio_shared_add_proxy_lookup(struct device *consumer, const char *con_id, 20 + unsigned long lflags); 20 21 21 22 #else 22 23 ··· 29 28 static inline void gpio_device_teardown_shared(struct gpio_device *gdev) { } 30 29 31 30 static inline int gpio_shared_add_proxy_lookup(struct device *consumer, 31 + const char *con_id, 32 32 unsigned long lflags) 33 33 { 34 34 return 0;
+79 -57
drivers/gpio/gpiolib.c
··· 1105 1105 gdev->ngpio = gc->ngpio; 1106 1106 gdev->can_sleep = gc->can_sleep; 1107 1107 1108 + rwlock_init(&gdev->line_state_lock); 1109 + RAW_INIT_NOTIFIER_HEAD(&gdev->line_state_notifier); 1110 + BLOCKING_INIT_NOTIFIER_HEAD(&gdev->device_notifier); 1111 + 1112 + ret = init_srcu_struct(&gdev->srcu); 1113 + if (ret) 1114 + goto err_free_label; 1115 + 1116 + ret = init_srcu_struct(&gdev->desc_srcu); 1117 + if (ret) 1118 + goto err_cleanup_gdev_srcu; 1119 + 1108 1120 scoped_guard(mutex, &gpio_devices_lock) { 1109 1121 /* 1110 1122 * TODO: this allocates a Linux GPIO number base in the global ··· 1131 1119 if (base < 0) { 1132 1120 ret = base; 1133 1121 base = 0; 1134 - goto err_free_label; 1122 + goto err_cleanup_desc_srcu; 1135 1123 } 1136 1124 1137 1125 /* ··· 1151 1139 ret = gpiodev_add_to_list_unlocked(gdev); 1152 1140 if (ret) { 1153 1141 gpiochip_err(gc, "GPIO integer space overlap, cannot add chip\n"); 1154 - goto err_free_label; 1142 + goto err_cleanup_desc_srcu; 1155 1143 } 1156 1144 } 1157 - 1158 - rwlock_init(&gdev->line_state_lock); 1159 - RAW_INIT_NOTIFIER_HEAD(&gdev->line_state_notifier); 1160 - BLOCKING_INIT_NOTIFIER_HEAD(&gdev->device_notifier); 1161 - 1162 - ret = init_srcu_struct(&gdev->srcu); 1163 - if (ret) 1164 - goto err_remove_from_list; 1165 - 1166 - ret = init_srcu_struct(&gdev->desc_srcu); 1167 - if (ret) 1168 - goto err_cleanup_gdev_srcu; 1169 1145 1170 1146 #ifdef CONFIG_PINCTRL 1171 1147 INIT_LIST_HEAD(&gdev->pin_ranges); ··· 1164 1164 1165 1165 ret = gpiochip_set_names(gc); 1166 1166 if (ret) 1167 - goto err_cleanup_desc_srcu; 1167 + goto err_remove_from_list; 1168 1168 1169 1169 ret = gpiochip_init_valid_mask(gc); 1170 1170 if (ret) 1171 - goto err_cleanup_desc_srcu; 1171 + goto err_remove_from_list; 1172 1172 1173 1173 for (desc_index = 0; desc_index < gc->ngpio; desc_index++) { 1174 1174 struct gpio_desc *desc = &gdev->descs[desc_index]; ··· 1248 1248 of_gpiochip_remove(gc); 1249 1249 err_free_valid_mask: 1250 1250 gpiochip_free_valid_mask(gc); 1251 - err_cleanup_desc_srcu: 1252 - cleanup_srcu_struct(&gdev->desc_srcu); 1253 - err_cleanup_gdev_srcu: 1254 - cleanup_srcu_struct(&gdev->srcu); 1255 1251 err_remove_from_list: 1256 1252 scoped_guard(mutex, &gpio_devices_lock) 1257 1253 list_del_rcu(&gdev->list); ··· 1257 1261 gpio_device_put(gdev); 1258 1262 goto err_print_message; 1259 1263 } 1264 + err_cleanup_desc_srcu: 1265 + cleanup_srcu_struct(&gdev->desc_srcu); 1266 + err_cleanup_gdev_srcu: 1267 + cleanup_srcu_struct(&gdev->srcu); 1260 1268 err_free_label: 1261 1269 kfree_const(gdev->label); 1262 1270 err_free_descs: ··· 4508 4508 } 4509 4509 EXPORT_SYMBOL_GPL(gpiod_remove_hogs); 4510 4510 4511 - static struct gpiod_lookup_table *gpiod_find_lookup_table(struct device *dev) 4511 + static bool gpiod_match_lookup_table(struct device *dev, 4512 + const struct gpiod_lookup_table *table) 4512 4513 { 4513 4514 const char *dev_id = dev ? 
dev_name(dev) : NULL; 4514 - struct gpiod_lookup_table *table; 4515 4515 4516 - list_for_each_entry(table, &gpio_lookup_list, list) { 4517 - if (table->dev_id && dev_id) { 4518 - /* 4519 - * Valid strings on both ends, must be identical to have 4520 - * a match 4521 - */ 4522 - if (!strcmp(table->dev_id, dev_id)) 4523 - return table; 4524 - } else { 4525 - /* 4526 - * One of the pointers is NULL, so both must be to have 4527 - * a match 4528 - */ 4529 - if (dev_id == table->dev_id) 4530 - return table; 4531 - } 4516 + lockdep_assert_held(&gpio_lookup_lock); 4517 + 4518 + if (table->dev_id && dev_id) { 4519 + /* 4520 + * Valid strings on both ends, must be identical to have 4521 + * a match 4522 + */ 4523 + if (!strcmp(table->dev_id, dev_id)) 4524 + return true; 4525 + } else { 4526 + /* 4527 + * One of the pointers is NULL, so both must be to have 4528 + * a match 4529 + */ 4530 + if (dev_id == table->dev_id) 4531 + return true; 4532 4532 } 4533 4533 4534 - return NULL; 4534 + return false; 4535 4535 } 4536 4536 4537 - static struct gpio_desc *gpiod_find(struct device *dev, const char *con_id, 4538 - unsigned int idx, unsigned long *flags) 4537 + static struct gpio_desc *gpio_desc_table_match(struct device *dev, const char *con_id, 4538 + unsigned int idx, unsigned long *flags, 4539 + struct gpiod_lookup_table *table) 4539 4540 { 4540 - struct gpio_desc *desc = ERR_PTR(-ENOENT); 4541 - struct gpiod_lookup_table *table; 4541 + struct gpio_desc *desc; 4542 4542 struct gpiod_lookup *p; 4543 4543 struct gpio_chip *gc; 4544 4544 4545 - guard(mutex)(&gpio_lookup_lock); 4546 - 4547 - table = gpiod_find_lookup_table(dev); 4548 - if (!table) 4549 - return desc; 4545 + lockdep_assert_held(&gpio_lookup_lock); 4550 4546 4551 4547 for (p = &table->table[0]; p->key; p++) { 4552 4548 /* idx must always match exactly */ ··· 4596 4600 return desc; 4597 4601 } 4598 4602 4599 - return desc; 4603 + return NULL; 4604 + } 4605 + 4606 + static struct gpio_desc *gpiod_find(struct device *dev, const char *con_id, 4607 + unsigned int idx, unsigned long *flags) 4608 + { 4609 + struct gpiod_lookup_table *table; 4610 + struct gpio_desc *desc; 4611 + 4612 + guard(mutex)(&gpio_lookup_lock); 4613 + 4614 + list_for_each_entry(table, &gpio_lookup_list, list) { 4615 + if (!gpiod_match_lookup_table(dev, table)) 4616 + continue; 4617 + 4618 + desc = gpio_desc_table_match(dev, con_id, idx, flags, table); 4619 + if (!desc) 4620 + continue; 4621 + 4622 + /* On IS_ERR() or match. */ 4623 + return desc; 4624 + } 4625 + 4626 + return ERR_PTR(-ENOENT); 4600 4627 } 4601 4628 4602 4629 static int platform_gpio_count(struct device *dev, const char *con_id) ··· 4629 4610 unsigned int count = 0; 4630 4611 4631 4612 scoped_guard(mutex, &gpio_lookup_lock) { 4632 - table = gpiod_find_lookup_table(dev); 4633 - if (!table) 4634 - return -ENOENT; 4613 + list_for_each_entry(table, &gpio_lookup_list, list) { 4614 + if (!gpiod_match_lookup_table(dev, table)) 4615 + continue; 4635 4616 4636 - for (p = &table->table[0]; p->key; p++) { 4637 - if ((con_id && p->con_id && !strcmp(con_id, p->con_id)) || 4638 - (!con_id && !p->con_id)) 4639 - count++; 4617 + for (p = &table->table[0]; p->key; p++) { 4618 + if ((con_id && p->con_id && 4619 + !strcmp(con_id, p->con_id)) || 4620 + (!con_id && !p->con_id)) 4621 + count++; 4622 + } 4640 4623 } 4641 4624 } 4642 4625 ··· 4717 4696 * lookup table for the proxy device as previously 4718 4697 * we only knew the consumer's fwnode. 
4719 4698 */ 4720 - ret = gpio_shared_add_proxy_lookup(consumer, lookupflags); 4699 + ret = gpio_shared_add_proxy_lookup(consumer, con_id, 4700 + lookupflags); 4721 4701 if (ret) 4722 4702 return ERR_PTR(ret); 4723 4703
+4 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 3445 3445 (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_GFX || 3446 3446 adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_SDMA)) 3447 3447 continue; 3448 - /* skip CG for VCE/UVD/VPE, it's handled specially */ 3448 + /* skip CG for VCE/UVD, it's handled specially */ 3449 3449 if (adev->ip_blocks[i].version->type != AMD_IP_BLOCK_TYPE_UVD && 3450 3450 adev->ip_blocks[i].version->type != AMD_IP_BLOCK_TYPE_VCE && 3451 3451 adev->ip_blocks[i].version->type != AMD_IP_BLOCK_TYPE_VCN && 3452 - adev->ip_blocks[i].version->type != AMD_IP_BLOCK_TYPE_VPE && 3453 3452 adev->ip_blocks[i].version->type != AMD_IP_BLOCK_TYPE_JPEG && 3454 3453 adev->ip_blocks[i].version->funcs->set_powergating_state) { 3455 3454 /* enable powergating to save power */ ··· 5865 5866 5866 5867 if (ret) 5867 5868 goto mode1_reset_failed; 5869 + 5870 + /* enable mmio access after mode 1 reset completed */ 5871 + adev->no_hw_access = false; 5868 5872 5869 5873 amdgpu_device_load_pci_state(adev->pdev); 5870 5874 ret = amdgpu_psp_wait_for_bootloader(adev);
+31 -5
drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
··· 89 89 return seq; 90 90 } 91 91 92 + static void amdgpu_fence_save_fence_wptr_start(struct amdgpu_fence *af) 93 + { 94 + af->fence_wptr_start = af->ring->wptr; 95 + } 96 + 97 + static void amdgpu_fence_save_fence_wptr_end(struct amdgpu_fence *af) 98 + { 99 + af->fence_wptr_end = af->ring->wptr; 100 + } 101 + 92 102 /** 93 103 * amdgpu_fence_emit - emit a fence on the requested ring 94 104 * ··· 126 116 &ring->fence_drv.lock, 127 117 adev->fence_context + ring->idx, seq); 128 118 119 + amdgpu_fence_save_fence_wptr_start(af); 129 120 amdgpu_ring_emit_fence(ring, ring->fence_drv.gpu_addr, 130 121 seq, flags | AMDGPU_FENCE_FLAG_INT); 122 + amdgpu_fence_save_fence_wptr_end(af); 131 123 amdgpu_fence_save_wptr(af); 132 124 pm_runtime_get_noresume(adev_to_drm(adev)->dev); 133 125 ptr = &ring->fence_drv.fences[seq & ring->fence_drv.num_fences_mask]; ··· 721 709 struct amdgpu_ring *ring = af->ring; 722 710 unsigned long flags; 723 711 u32 seq, last_seq; 712 + bool reemitted = false; 724 713 725 714 last_seq = amdgpu_fence_read(ring) & ring->fence_drv.num_fences_mask; 726 715 seq = ring->fence_drv.sync_seq & ring->fence_drv.num_fences_mask; ··· 739 726 if (unprocessed && !dma_fence_is_signaled_locked(unprocessed)) { 740 727 fence = container_of(unprocessed, struct amdgpu_fence, base); 741 728 742 - if (fence == af) 729 + if (fence->reemitted > 1) 730 + reemitted = true; 731 + else if (fence == af) 743 732 dma_fence_set_error(&fence->base, -ETIME); 744 733 else if (fence->context == af->context) 745 734 dma_fence_set_error(&fence->base, -ECANCELED); ··· 749 734 rcu_read_unlock(); 750 735 } while (last_seq != seq); 751 736 spin_unlock_irqrestore(&ring->fence_drv.lock, flags); 752 - /* signal the guilty fence */ 753 - amdgpu_fence_write(ring, (u32)af->base.seqno); 754 - amdgpu_fence_process(ring); 737 + 738 + if (reemitted) { 739 + /* if we've already reemitted once then just cancel everything */ 740 + amdgpu_fence_driver_force_completion(af->ring); 741 + af->ring->ring_backup_entries_to_copy = 0; 742 + } 755 743 } 756 744 757 745 void amdgpu_fence_save_wptr(struct amdgpu_fence *af) ··· 802 784 /* save everything if the ring is not guilty, otherwise 803 785 * just save the content from other contexts. 804 786 */ 805 - if (!guilty_fence || (fence->context != guilty_fence->context)) 787 + if (!fence->reemitted && 788 + (!guilty_fence || (fence->context != guilty_fence->context))) { 806 789 amdgpu_ring_backup_unprocessed_command(ring, wptr, 807 790 fence->wptr); 791 + } else if (!fence->reemitted) { 792 + /* always save the fence */ 793 + amdgpu_ring_backup_unprocessed_command(ring, 794 + fence->fence_wptr_start, 795 + fence->fence_wptr_end); 796 + } 808 797 wptr = fence->wptr; 798 + fence->reemitted++; 809 799 } 810 800 rcu_read_unlock(); 811 801 } while (last_seq != seq);
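Each fence now records the ring write pointer immediately before and after emitting its fence packet, so a reset can re-emit just the fence of a guilty submission while dropping the payload; the bracketing in amdgpu_fence_emit(), isolated:

af->fence_wptr_start = ring->wptr;      /* fence packet starts here */
amdgpu_ring_emit_fence(ring, ring->fence_drv.gpu_addr, seq,
                       flags | AMDGPU_FENCE_FLAG_INT);
af->fence_wptr_end = ring->wptr;        /* and ends here */

The reemitted counter then bounds the retries: a fence already re-emitted more than once flips the handler into amdgpu_fence_driver_force_completion() with no backup entries to replay, rather than looping on a persistently hung submission.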
+24
drivers/gpu/drm/amd/amdgpu/amdgpu_isp.c
··· 318 318 } 319 319 EXPORT_SYMBOL(isp_kernel_buffer_free); 320 320 321 + static int isp_resume(struct amdgpu_ip_block *ip_block) 322 + { 323 + struct amdgpu_device *adev = ip_block->adev; 324 + struct amdgpu_isp *isp = &adev->isp; 325 + 326 + if (isp->funcs->hw_resume) 327 + return isp->funcs->hw_resume(isp); 328 + 329 + return -ENODEV; 330 + } 331 + 332 + static int isp_suspend(struct amdgpu_ip_block *ip_block) 333 + { 334 + struct amdgpu_device *adev = ip_block->adev; 335 + struct amdgpu_isp *isp = &adev->isp; 336 + 337 + if (isp->funcs->hw_suspend) 338 + return isp->funcs->hw_suspend(isp); 339 + 340 + return -ENODEV; 341 + } 342 + 321 343 static const struct amd_ip_funcs isp_ip_funcs = { 322 344 .name = "isp_ip", 323 345 .early_init = isp_early_init, 324 346 .hw_init = isp_hw_init, 325 347 .hw_fini = isp_hw_fini, 326 348 .is_idle = isp_is_idle, 349 + .suspend = isp_suspend, 350 + .resume = isp_resume, 327 351 .set_clockgating_state = isp_set_clockgating_state, 328 352 .set_powergating_state = isp_set_powergating_state, 329 353 };
+2
drivers/gpu/drm/amd/amdgpu/amdgpu_isp.h
··· 38 38 struct isp_funcs { 39 39 int (*hw_init)(struct amdgpu_isp *isp); 40 40 int (*hw_fini)(struct amdgpu_isp *isp); 41 + int (*hw_suspend)(struct amdgpu_isp *isp); 42 + int (*hw_resume)(struct amdgpu_isp *isp); 41 43 }; 42 44 43 45 struct amdgpu_isp {
+6
drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
··· 201 201 type = (amdgpu_device_ip_get_ip_block(adev, AMD_IP_BLOCK_TYPE_JPEG)) ? 202 202 AMD_IP_BLOCK_TYPE_JPEG : AMD_IP_BLOCK_TYPE_VCN; 203 203 break; 204 + case AMDGPU_HW_IP_VPE: 205 + type = AMD_IP_BLOCK_TYPE_VPE; 206 + break; 204 207 default: 205 208 type = AMD_IP_BLOCK_TYPE_NUM; 206 209 break; ··· 723 720 break; 724 721 case AMD_IP_BLOCK_TYPE_UVD: 725 722 count = adev->uvd.num_uvd_inst; 723 + break; 724 + case AMD_IP_BLOCK_TYPE_VPE: 725 + count = adev->vpe.num_instances; 726 726 break; 727 727 /* For all other IP block types not listed in the switch statement 728 728 * the ip status is valid here and the instance count is one.
+6 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
··· 144 144 struct amdgpu_ring *ring; 145 145 ktime_t start_timestamp; 146 146 147 - /* wptr for the fence for resets */ 147 + /* wptr for the total submission for resets */ 148 148 u64 wptr; 149 149 /* fence context for resets */ 150 150 u64 context; 151 + /* has this fence been reemitted */ 152 + unsigned int reemitted; 153 + /* wptr for the fence for the submission */ 154 + u64 fence_wptr_start; 155 + u64 fence_wptr_end; 151 156 }; 152 157 153 158 extern const struct drm_sched_backend_ops amdgpu_sched_ops;
+41
drivers/gpu/drm/amd/amdgpu/isp_v4_1_1.c
··· 26 26 */ 27 27 28 28 #include <linux/gpio/machine.h> 29 + #include <linux/pm_runtime.h> 29 30 #include "amdgpu.h" 30 31 #include "isp_v4_1_1.h" 31 32 ··· 146 145 return -ENODEV; 147 146 } 148 147 148 + /* The devices will be managed by the pm ops from the parent */ 149 + dev_pm_syscore_device(dev, true); 150 + 149 151 exit: 150 152 /* Continue to add */ 151 153 return 0; ··· 181 177 drm_err(&adev->ddev, "Failed to remove dev from genpd %d\n", ret); 182 178 return -ENODEV; 183 179 } 180 + dev_pm_syscore_device(dev, false); 184 181 185 182 exit: 186 183 /* Continue to remove */ 187 184 return 0; 185 + } 186 + 187 + static int isp_suspend_device(struct device *dev, void *data) 188 + { 189 + return pm_runtime_force_suspend(dev); 190 + } 191 + 192 + static int isp_resume_device(struct device *dev, void *data) 193 + { 194 + return pm_runtime_force_resume(dev); 195 + } 196 + 197 + static int isp_v4_1_1_hw_suspend(struct amdgpu_isp *isp) 198 + { 199 + int r; 200 + 201 + r = device_for_each_child(isp->parent, NULL, 202 + isp_suspend_device); 203 + if (r) 204 + dev_err(isp->parent, "failed to suspend hw devices (%d)\n", r); 205 + 206 + return r; 207 + } 208 + 209 + static int isp_v4_1_1_hw_resume(struct amdgpu_isp *isp) 210 + { 211 + int r; 212 + 213 + r = device_for_each_child(isp->parent, NULL, 214 + isp_resume_device); 215 + if (r) 216 + dev_err(isp->parent, "failed to resume hw device (%d)\n", r); 217 + 218 + return r; 188 219 } 189 220 190 221 static int isp_v4_1_1_hw_init(struct amdgpu_isp *isp) ··· 408 369 static const struct isp_funcs isp_v4_1_1_funcs = { 409 370 .hw_init = isp_v4_1_1_hw_init, 410 371 .hw_fini = isp_v4_1_1_hw_fini, 372 + .hw_suspend = isp_v4_1_1_hw_suspend, 373 + .hw_resume = isp_v4_1_1_hw_resume, 411 374 }; 412 375 413 376 void isp_v4_1_1_set_isp_funcs(struct amdgpu_isp *isp)
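Since the child devices are flagged with dev_pm_syscore_device() at init time, the PM core leaves them alone and the new IP-block hooks drive them explicitly through pm_runtime_force_suspend()/_resume(); the traversal, reduced to the idiom:

static int isp_suspend_device(struct device *dev, void *data)
{
        return pm_runtime_force_suspend(dev);   /* per-child suspend */
}

/* walk every child registered under isp->parent; a non-zero return
 * aborts the walk and is propagated to the IP-block .suspend hook */
r = device_for_each_child(isp->parent, NULL, isp_suspend_device);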
+2 -2
drivers/gpu/drm/amd/display/dc/bios/bios_parser.c
··· 763 763 return BP_RESULT_FAILURE; 764 764 765 765 return bp->cmd_tbl.dac1_encoder_control( 766 - bp, cntl->action == ENCODER_CONTROL_ENABLE, 766 + bp, cntl->action, 767 767 cntl->pixel_clock, ATOM_DAC1_PS2); 768 768 } else if (cntl->engine_id == ENGINE_ID_DACB) { 769 769 if (!bp->cmd_tbl.dac2_encoder_control) 770 770 return BP_RESULT_FAILURE; 771 771 772 772 return bp->cmd_tbl.dac2_encoder_control( 773 - bp, cntl->action == ENCODER_CONTROL_ENABLE, 773 + bp, cntl->action, 774 774 cntl->pixel_clock, ATOM_DAC1_PS2); 775 775 } 776 776
+35 -9
drivers/gpu/drm/amd/display/dc/bios/command_table.c
··· 1797 1797 &params.ucEncodeMode)) 1798 1798 return BP_RESULT_BADINPUT; 1799 1799 1800 - params.ucDstBpc = bp_params->bit_depth; 1800 + switch (bp_params->color_depth) { 1801 + case COLOR_DEPTH_UNDEFINED: 1802 + params.ucDstBpc = PANEL_BPC_UNDEFINE; 1803 + break; 1804 + case COLOR_DEPTH_666: 1805 + params.ucDstBpc = PANEL_6BIT_PER_COLOR; 1806 + break; 1807 + default: 1808 + case COLOR_DEPTH_888: 1809 + params.ucDstBpc = PANEL_8BIT_PER_COLOR; 1810 + break; 1811 + case COLOR_DEPTH_101010: 1812 + params.ucDstBpc = PANEL_10BIT_PER_COLOR; 1813 + break; 1814 + case COLOR_DEPTH_121212: 1815 + params.ucDstBpc = PANEL_12BIT_PER_COLOR; 1816 + break; 1817 + case COLOR_DEPTH_141414: 1818 + dm_error("14-bit color not supported by SelectCRTC_Source v3\n"); 1819 + break; 1820 + case COLOR_DEPTH_161616: 1821 + params.ucDstBpc = PANEL_16BIT_PER_COLOR; 1822 + break; 1823 + } 1801 1824 1802 1825 if (EXEC_BIOS_CMD_TABLE(SelectCRTC_Source, params)) 1803 1826 result = BP_RESULT_OK; ··· 1838 1815 1839 1816 static enum bp_result dac1_encoder_control_v1( 1840 1817 struct bios_parser *bp, 1841 - bool enable, 1818 + enum bp_encoder_control_action action, 1842 1819 uint32_t pixel_clock, 1843 1820 uint8_t dac_standard); 1844 1821 static enum bp_result dac2_encoder_control_v1( 1845 1822 struct bios_parser *bp, 1846 - bool enable, 1823 + enum bp_encoder_control_action action, 1847 1824 uint32_t pixel_clock, 1848 1825 uint8_t dac_standard); 1849 1826 ··· 1869 1846 1870 1847 static void dac_encoder_control_prepare_params( 1871 1848 DAC_ENCODER_CONTROL_PS_ALLOCATION *params, 1872 - bool enable, 1849 + enum bp_encoder_control_action action, 1873 1850 uint32_t pixel_clock, 1874 1851 uint8_t dac_standard) 1875 1852 { 1876 1853 params->ucDacStandard = dac_standard; 1877 - if (enable) 1854 + if (action == ENCODER_CONTROL_SETUP || 1855 + action == ENCODER_CONTROL_INIT) 1856 + params->ucAction = ATOM_ENCODER_INIT; 1857 + else if (action == ENCODER_CONTROL_ENABLE) 1878 1858 params->ucAction = ATOM_ENABLE; 1879 1859 else 1880 1860 params->ucAction = ATOM_DISABLE; ··· 1890 1864 1891 1865 static enum bp_result dac1_encoder_control_v1( 1892 1866 struct bios_parser *bp, 1893 - bool enable, 1867 + enum bp_encoder_control_action action, 1894 1868 uint32_t pixel_clock, 1895 1869 uint8_t dac_standard) 1896 1870 { ··· 1899 1873 1900 1874 dac_encoder_control_prepare_params( 1901 1875 &params, 1902 - enable, 1876 + action, 1903 1877 pixel_clock, 1904 1878 dac_standard); 1905 1879 ··· 1911 1885 1912 1886 static enum bp_result dac2_encoder_control_v1( 1913 1887 struct bios_parser *bp, 1914 - bool enable, 1888 + enum bp_encoder_control_action action, 1915 1889 uint32_t pixel_clock, 1916 1890 uint8_t dac_standard) 1917 1891 { ··· 1920 1894 1921 1895 dac_encoder_control_prepare_params( 1922 1896 &params, 1923 - enable, 1897 + action, 1924 1898 pixel_clock, 1925 1899 dac_standard); 1926 1900
+2 -2
drivers/gpu/drm/amd/display/dc/bios/command_table.h
··· 57 57 struct bp_crtc_source_select *bp_params); 58 58 enum bp_result (*dac1_encoder_control)( 59 59 struct bios_parser *bp, 60 - bool enable, 60 + enum bp_encoder_control_action action, 61 61 uint32_t pixel_clock, 62 62 uint8_t dac_standard); 63 63 enum bp_result (*dac2_encoder_control)( 64 64 struct bios_parser *bp, 65 - bool enable, 65 + enum bp_encoder_control_action action, 66 66 uint32_t pixel_clock, 67 67 uint8_t dac_standard); 68 68 enum bp_result (*dac1_output_control)(
+5 -1
drivers/gpu/drm/amd/display/dc/dml/Makefile
··· 30 30 31 31 ifneq ($(CONFIG_FRAME_WARN),0) 32 32 ifeq ($(filter y,$(CONFIG_KASAN)$(CONFIG_KCSAN)),y) 33 - frame_warn_limit := 3072 33 + ifeq ($(CONFIG_CC_IS_CLANG)$(CONFIG_COMPILE_TEST),yy) 34 + frame_warn_limit := 4096 35 + else 36 + frame_warn_limit := 3072 37 + endif 34 38 else 35 39 frame_warn_limit := 2048 36 40 endif
+139 -406
drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
··· 77 77 static unsigned int dscComputeDelay( 78 78 enum output_format_class pixelFormat, 79 79 enum output_encoder_class Output); 80 - // Super monster function with some 45 argument 81 80 static bool CalculatePrefetchSchedule( 82 81 struct display_mode_lib *mode_lib, 83 - double PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyPixelMixedWithVMData, 84 - double PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyVMDataOnly, 82 + unsigned int k, 85 83 Pipe *myPipe, 86 84 unsigned int DSCDelay, 87 - double DPPCLKDelaySubtotalPlusCNVCFormater, 88 - double DPPCLKDelaySCL, 89 - double DPPCLKDelaySCLLBOnly, 90 - double DPPCLKDelayCNVCCursor, 91 - double DISPCLKDelaySubtotal, 92 85 unsigned int DPP_RECOUT_WIDTH, 93 - enum output_format_class OutputFormat, 94 - unsigned int MaxInterDCNTileRepeaters, 95 86 unsigned int VStartup, 96 87 unsigned int MaxVStartup, 97 - unsigned int GPUVMPageTableLevels, 98 - bool GPUVMEnable, 99 - bool HostVMEnable, 100 - unsigned int HostVMMaxNonCachedPageTableLevels, 101 - double HostVMMinPageSize, 102 - bool DynamicMetadataEnable, 103 - bool DynamicMetadataVMEnabled, 104 - int DynamicMetadataLinesBeforeActiveRequired, 105 - unsigned int DynamicMetadataTransmittedBytes, 106 88 double UrgentLatency, 107 89 double UrgentExtraLatency, 108 90 double TCalc, ··· 98 116 unsigned int MaxNumSwathY, 99 117 double PrefetchSourceLinesC, 100 118 unsigned int SwathWidthC, 101 - int BytePerPixelC, 102 119 double VInitPreFillC, 103 120 unsigned int MaxNumSwathC, 104 121 long swath_width_luma_ub, ··· 105 124 unsigned int SwathHeightY, 106 125 unsigned int SwathHeightC, 107 126 double TWait, 108 - bool ProgressiveToInterlaceUnitInOPP, 109 - double *DSTXAfterScaler, 110 - double *DSTYAfterScaler, 111 127 double *DestinationLinesForPrefetch, 112 128 double *PrefetchBandwidth, 113 129 double *DestinationLinesToRequestVMInVBlank, ··· 113 135 double *VRatioPrefetchC, 114 136 double *RequiredPrefetchPixDataBWLuma, 115 137 double *RequiredPrefetchPixDataBWChroma, 116 - bool *NotEnoughTimeForDynamicMetadata, 117 - double *Tno_bw, 118 - double *prefetch_vmrow_bw, 119 - double *Tdmdl_vm, 120 - double *Tdmdl, 121 - unsigned int *VUpdateOffsetPix, 122 - double *VUpdateWidthPix, 123 - double *VReadyOffsetPix); 138 + bool *NotEnoughTimeForDynamicMetadata); 124 139 static double RoundToDFSGranularityUp(double Clock, double VCOSpeed); 125 140 static double RoundToDFSGranularityDown(double Clock, double VCOSpeed); 126 141 static void CalculateDCCConfiguration( ··· 265 294 static void CalculateWatermarksAndDRAMSpeedChangeSupport( 266 295 struct display_mode_lib *mode_lib, 267 296 unsigned int PrefetchMode, 268 - unsigned int NumberOfActivePlanes, 269 - unsigned int MaxLineBufferLines, 270 - unsigned int LineBufferSize, 271 - unsigned int DPPOutputBufferPixels, 272 - unsigned int DETBufferSizeInKByte, 273 - unsigned int WritebackInterfaceBufferSize, 274 297 double DCFCLK, 275 298 double ReturnBW, 276 - bool GPUVMEnable, 277 - unsigned int dpte_group_bytes[], 278 - unsigned int MetaChunkSize, 279 299 double UrgentLatency, 280 300 double ExtraLatency, 281 - double WritebackLatency, 282 - double WritebackChunkSize, 283 301 double SOCCLK, 284 - double DRAMClockChangeLatency, 285 - double SRExitTime, 286 - double SREnterPlusExitTime, 287 302 double DCFCLKDeepSleep, 288 303 unsigned int DPPPerPlane[], 289 - bool DCCEnable[], 290 304 double DPPCLK[], 291 305 unsigned int DETBufferSizeY[], 292 306 unsigned int DETBufferSizeC[], 293 307 unsigned int SwathHeightY[], 294 308 unsigned int 
SwathHeightC[], 295 - unsigned int LBBitPerPixel[], 296 309 double SwathWidthY[], 297 310 double SwathWidthC[], 298 - double HRatio[], 299 - double HRatioChroma[], 300 - unsigned int vtaps[], 301 - unsigned int VTAPsChroma[], 302 - double VRatio[], 303 - double VRatioChroma[], 304 - unsigned int HTotal[], 305 - double PixelClock[], 306 - unsigned int BlendingAndTiming[], 307 311 double BytePerPixelDETY[], 308 312 double BytePerPixelDETC[], 309 - double DSTXAfterScaler[], 310 - double DSTYAfterScaler[], 311 - bool WritebackEnable[], 312 - enum source_format_class WritebackPixelFormat[], 313 - double WritebackDestinationWidth[], 314 - double WritebackDestinationHeight[], 315 - double WritebackSourceHeight[], 316 - enum clock_change_support *DRAMClockChangeSupport, 317 - double *UrgentWatermark, 318 - double *WritebackUrgentWatermark, 319 - double *DRAMClockChangeWatermark, 320 - double *WritebackDRAMClockChangeWatermark, 321 - double *StutterExitWatermark, 322 - double *StutterEnterPlusExitWatermark, 323 - double *MinActiveDRAMClockChangeLatencySupported); 313 + enum clock_change_support *DRAMClockChangeSupport); 324 314 static void CalculateDCFCLKDeepSleep( 325 315 struct display_mode_lib *mode_lib, 326 316 unsigned int NumberOfActivePlanes, ··· 742 810 743 811 static bool CalculatePrefetchSchedule( 744 812 struct display_mode_lib *mode_lib, 745 - double PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyPixelMixedWithVMData, 746 - double PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyVMDataOnly, 813 + unsigned int k, 747 814 Pipe *myPipe, 748 815 unsigned int DSCDelay, 749 - double DPPCLKDelaySubtotalPlusCNVCFormater, 750 - double DPPCLKDelaySCL, 751 - double DPPCLKDelaySCLLBOnly, 752 - double DPPCLKDelayCNVCCursor, 753 - double DISPCLKDelaySubtotal, 754 816 unsigned int DPP_RECOUT_WIDTH, 755 - enum output_format_class OutputFormat, 756 - unsigned int MaxInterDCNTileRepeaters, 757 817 unsigned int VStartup, 758 818 unsigned int MaxVStartup, 759 - unsigned int GPUVMPageTableLevels, 760 - bool GPUVMEnable, 761 - bool HostVMEnable, 762 - unsigned int HostVMMaxNonCachedPageTableLevels, 763 - double HostVMMinPageSize, 764 - bool DynamicMetadataEnable, 765 - bool DynamicMetadataVMEnabled, 766 - int DynamicMetadataLinesBeforeActiveRequired, 767 - unsigned int DynamicMetadataTransmittedBytes, 768 819 double UrgentLatency, 769 820 double UrgentExtraLatency, 770 821 double TCalc, ··· 761 846 unsigned int MaxNumSwathY, 762 847 double PrefetchSourceLinesC, 763 848 unsigned int SwathWidthC, 764 - int BytePerPixelC, 765 849 double VInitPreFillC, 766 850 unsigned int MaxNumSwathC, 767 851 long swath_width_luma_ub, ··· 768 854 unsigned int SwathHeightY, 769 855 unsigned int SwathHeightC, 770 856 double TWait, 771 - bool ProgressiveToInterlaceUnitInOPP, 772 - double *DSTXAfterScaler, 773 - double *DSTYAfterScaler, 774 857 double *DestinationLinesForPrefetch, 775 858 double *PrefetchBandwidth, 776 859 double *DestinationLinesToRequestVMInVBlank, ··· 776 865 double *VRatioPrefetchC, 777 866 double *RequiredPrefetchPixDataBWLuma, 778 867 double *RequiredPrefetchPixDataBWChroma, 779 - bool *NotEnoughTimeForDynamicMetadata, 780 - double *Tno_bw, 781 - double *prefetch_vmrow_bw, 782 - double *Tdmdl_vm, 783 - double *Tdmdl, 784 - unsigned int *VUpdateOffsetPix, 785 - double *VUpdateWidthPix, 786 - double *VReadyOffsetPix) 868 + bool *NotEnoughTimeForDynamicMetadata) 787 869 { 870 + struct vba_vars_st *v = &mode_lib->vba; 871 + double DPPCLKDelaySubtotalPlusCNVCFormater = v->DPPCLKDelaySubtotal 
+ v->DPPCLKDelayCNVCFormater; 788 872 bool MyError = false; 789 873 unsigned int DPPCycles = 0, DISPCLKCycles = 0; 790 874 double DSTTotalPixelsAfterScaler = 0; ··· 811 905 double Tdmec = 0; 812 906 double Tdmsks = 0; 813 907 814 - if (GPUVMEnable == true && HostVMEnable == true) { 815 - HostVMInefficiencyFactor = PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyPixelMixedWithVMData / PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyVMDataOnly; 816 - HostVMDynamicLevelsTrips = HostVMMaxNonCachedPageTableLevels; 908 + if (v->GPUVMEnable == true && v->HostVMEnable == true) { 909 + HostVMInefficiencyFactor = v->PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyPixelMixedWithVMData / v->PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyVMDataOnly; 910 + HostVMDynamicLevelsTrips = v->HostVMMaxNonCachedPageTableLevels; 817 911 } else { 818 912 HostVMInefficiencyFactor = 1; 819 913 HostVMDynamicLevelsTrips = 0; 820 914 } 821 915 822 916 CalculateDynamicMetadataParameters( 823 - MaxInterDCNTileRepeaters, 917 + v->MaxInterDCNTileRepeaters, 824 918 myPipe->DPPCLK, 825 919 myPipe->DISPCLK, 826 920 myPipe->DCFCLKDeepSleep, 827 921 myPipe->PixelClock, 828 922 myPipe->HTotal, 829 923 myPipe->VBlank, 830 - DynamicMetadataTransmittedBytes, 831 - DynamicMetadataLinesBeforeActiveRequired, 924 + v->DynamicMetadataTransmittedBytes[k], 925 + v->DynamicMetadataLinesBeforeActiveRequired[k], 832 926 myPipe->InterlaceEnable, 833 - ProgressiveToInterlaceUnitInOPP, 927 + v->ProgressiveToInterlaceUnitInOPP, 834 928 &Tsetup, 835 929 &Tdmbf, 836 930 &Tdmec, ··· 838 932 839 933 LineTime = myPipe->HTotal / myPipe->PixelClock; 840 934 trip_to_mem = UrgentLatency; 841 - Tvm_trips = UrgentExtraLatency + trip_to_mem * (GPUVMPageTableLevels * (HostVMDynamicLevelsTrips + 1) - 1); 935 + Tvm_trips = UrgentExtraLatency + trip_to_mem * (v->GPUVMMaxPageTableLevels * (HostVMDynamicLevelsTrips + 1) - 1); 842 936 843 - if (DynamicMetadataVMEnabled == true && GPUVMEnable == true) { 844 - *Tdmdl = TWait + Tvm_trips + trip_to_mem; 937 + if (v->DynamicMetadataVMEnabled == true && v->GPUVMEnable == true) { 938 + v->Tdmdl[k] = TWait + Tvm_trips + trip_to_mem; 845 939 } else { 846 - *Tdmdl = TWait + UrgentExtraLatency; 940 + v->Tdmdl[k] = TWait + UrgentExtraLatency; 847 941 } 848 942 849 - if (DynamicMetadataEnable == true) { 850 - if (VStartup * LineTime < Tsetup + *Tdmdl + Tdmbf + Tdmec + Tdmsks) { 943 + if (v->DynamicMetadataEnable[k] == true) { 944 + if (VStartup * LineTime < Tsetup + v->Tdmdl[k] + Tdmbf + Tdmec + Tdmsks) { 851 945 *NotEnoughTimeForDynamicMetadata = true; 852 946 } else { 853 947 *NotEnoughTimeForDynamicMetadata = false; ··· 855 949 dml_print("DML: Tdmbf: %fus - time for dmd transfer from dchub to dio output buffer\n", Tdmbf); 856 950 dml_print("DML: Tdmec: %fus - time dio takes to transfer dmd\n", Tdmec); 857 951 dml_print("DML: Tdmsks: %fus - time before active dmd must complete transmission at dio\n", Tdmsks); 858 - dml_print("DML: Tdmdl: %fus - time for fabric to become ready and fetch dmd \n", *Tdmdl); 952 + dml_print("DML: Tdmdl: %fus - time for fabric to become ready and fetch dmd \n", v->Tdmdl[k]); 859 953 } 860 954 } else { 861 955 *NotEnoughTimeForDynamicMetadata = false; 862 956 } 863 957 864 - *Tdmdl_vm = (DynamicMetadataEnable == true && DynamicMetadataVMEnabled == true && GPUVMEnable == true ? TWait + Tvm_trips : 0); 958 + v->Tdmdl_vm[k] = (v->DynamicMetadataEnable[k] == true && v->DynamicMetadataVMEnabled == true && v->GPUVMEnable == true ? 
TWait + Tvm_trips : 0); 865 959 866 960 if (myPipe->ScalerEnabled) 867 - DPPCycles = DPPCLKDelaySubtotalPlusCNVCFormater + DPPCLKDelaySCL; 961 + DPPCycles = DPPCLKDelaySubtotalPlusCNVCFormater + v->DPPCLKDelaySCL; 868 962 else 869 - DPPCycles = DPPCLKDelaySubtotalPlusCNVCFormater + DPPCLKDelaySCLLBOnly; 963 + DPPCycles = DPPCLKDelaySubtotalPlusCNVCFormater + v->DPPCLKDelaySCLLBOnly; 870 964 871 - DPPCycles = DPPCycles + myPipe->NumberOfCursors * DPPCLKDelayCNVCCursor; 965 + DPPCycles = DPPCycles + myPipe->NumberOfCursors * v->DPPCLKDelayCNVCCursor; 872 966 873 - DISPCLKCycles = DISPCLKDelaySubtotal; 967 + DISPCLKCycles = v->DISPCLKDelaySubtotal; 874 968 875 969 if (myPipe->DPPCLK == 0.0 || myPipe->DISPCLK == 0.0) 876 970 return true; 877 971 878 - *DSTXAfterScaler = DPPCycles * myPipe->PixelClock / myPipe->DPPCLK + DISPCLKCycles * myPipe->PixelClock / myPipe->DISPCLK 972 + v->DSTXAfterScaler[k] = DPPCycles * myPipe->PixelClock / myPipe->DPPCLK + DISPCLKCycles * myPipe->PixelClock / myPipe->DISPCLK 879 973 + DSCDelay; 880 974 881 - *DSTXAfterScaler = *DSTXAfterScaler + ((myPipe->ODMCombineEnabled)?18:0) + (myPipe->DPPPerPlane - 1) * DPP_RECOUT_WIDTH; 975 + v->DSTXAfterScaler[k] = v->DSTXAfterScaler[k] + ((myPipe->ODMCombineEnabled)?18:0) + (myPipe->DPPPerPlane - 1) * DPP_RECOUT_WIDTH; 882 976 883 - if (OutputFormat == dm_420 || (myPipe->InterlaceEnable && ProgressiveToInterlaceUnitInOPP)) 884 - *DSTYAfterScaler = 1; 977 + if (v->OutputFormat[k] == dm_420 || (myPipe->InterlaceEnable && v->ProgressiveToInterlaceUnitInOPP)) 978 + v->DSTYAfterScaler[k] = 1; 885 979 else 886 - *DSTYAfterScaler = 0; 980 + v->DSTYAfterScaler[k] = 0; 887 981 888 - DSTTotalPixelsAfterScaler = *DSTYAfterScaler * myPipe->HTotal + *DSTXAfterScaler; 889 - *DSTYAfterScaler = dml_floor(DSTTotalPixelsAfterScaler / myPipe->HTotal, 1); 890 - *DSTXAfterScaler = DSTTotalPixelsAfterScaler - ((double) (*DSTYAfterScaler * myPipe->HTotal)); 982 + DSTTotalPixelsAfterScaler = v->DSTYAfterScaler[k] * myPipe->HTotal + v->DSTXAfterScaler[k]; 983 + v->DSTYAfterScaler[k] = dml_floor(DSTTotalPixelsAfterScaler / myPipe->HTotal, 1); 984 + v->DSTXAfterScaler[k] = DSTTotalPixelsAfterScaler - ((double) (v->DSTYAfterScaler[k] * myPipe->HTotal)); 891 985 892 986 MyError = false; 893 987 ··· 896 990 Tvm_trips_rounded = dml_ceil(4.0 * Tvm_trips / LineTime, 1) / 4 * LineTime; 897 991 Tr0_trips_rounded = dml_ceil(4.0 * Tr0_trips / LineTime, 1) / 4 * LineTime; 898 992 899 - if (GPUVMEnable) { 900 - if (GPUVMPageTableLevels >= 3) { 901 - *Tno_bw = UrgentExtraLatency + trip_to_mem * ((GPUVMPageTableLevels - 2) - 1); 993 + if (v->GPUVMEnable) { 994 + if (v->GPUVMMaxPageTableLevels >= 3) { 995 + v->Tno_bw[k] = UrgentExtraLatency + trip_to_mem * ((v->GPUVMMaxPageTableLevels - 2) - 1); 902 996 } else 903 - *Tno_bw = 0; 997 + v->Tno_bw[k] = 0; 904 998 } else if (!myPipe->DCCEnable) 905 - *Tno_bw = LineTime; 999 + v->Tno_bw[k] = LineTime; 906 1000 else 907 - *Tno_bw = LineTime / 4; 1001 + v->Tno_bw[k] = LineTime / 4; 908 1002 909 - dst_y_prefetch_equ = VStartup - (Tsetup + dml_max(TWait + TCalc, *Tdmdl)) / LineTime 910 - - (*DSTYAfterScaler + *DSTXAfterScaler / myPipe->HTotal); 1003 + dst_y_prefetch_equ = VStartup - (Tsetup + dml_max(TWait + TCalc, v->Tdmdl[k])) / LineTime 1004 + - (v->DSTYAfterScaler[k] + v->DSTXAfterScaler[k] / myPipe->HTotal); 911 1005 dst_y_prefetch_equ = dml_min(dst_y_prefetch_equ, 63.75); // limit to the reg limit of U6.2 for DST_Y_PREFETCH 912 1006 913 1007 Lsw_oto = dml_max(PrefetchSourceLinesY, PrefetchSourceLinesC); 914 1008 
Tsw_oto = Lsw_oto * LineTime; 915 1009 916 - prefetch_bw_oto = (PrefetchSourceLinesY * swath_width_luma_ub * BytePerPixelY + PrefetchSourceLinesC * swath_width_chroma_ub * BytePerPixelC) / Tsw_oto; 1010 + prefetch_bw_oto = (PrefetchSourceLinesY * swath_width_luma_ub * BytePerPixelY + PrefetchSourceLinesC * swath_width_chroma_ub * v->BytePerPixelC[k]) / Tsw_oto; 917 1011 918 - if (GPUVMEnable == true) { 919 - Tvm_oto = dml_max3(*Tno_bw + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / prefetch_bw_oto, 1012 + if (v->GPUVMEnable == true) { 1013 + Tvm_oto = dml_max3(v->Tno_bw[k] + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / prefetch_bw_oto, 920 1014 Tvm_trips, 921 1015 LineTime / 4.0); 922 1016 } else 923 1017 Tvm_oto = LineTime / 4.0; 924 1018 925 - if ((GPUVMEnable == true || myPipe->DCCEnable == true)) { 1019 + if ((v->GPUVMEnable == true || myPipe->DCCEnable == true)) { 926 1020 Tr0_oto = dml_max3( 927 1021 (MetaRowByte + PixelPTEBytesPerRow * HostVMInefficiencyFactor) / prefetch_bw_oto, 928 1022 LineTime - Tvm_oto, LineTime / 4); ··· 948 1042 dml_print("DML: Tdmbf: %fus - time for dmd transfer from dchub to dio output buffer\n", Tdmbf); 949 1043 dml_print("DML: Tdmec: %fus - time dio takes to transfer dmd\n", Tdmec); 950 1044 dml_print("DML: Tdmsks: %fus - time before active dmd must complete transmission at dio\n", Tdmsks); 951 - dml_print("DML: Tdmdl_vm: %fus - time for vm stages of dmd \n", *Tdmdl_vm); 952 - dml_print("DML: Tdmdl: %fus - time for fabric to become ready and fetch dmd \n", *Tdmdl); 953 - dml_print("DML: dst_x_after_scl: %f pixels - number of pixel clocks pipeline and buffer delay after scaler \n", *DSTXAfterScaler); 954 - dml_print("DML: dst_y_after_scl: %d lines - number of lines of pipeline and buffer delay after scaler \n", (int)*DSTYAfterScaler); 1045 + dml_print("DML: Tdmdl_vm: %fus - time for vm stages of dmd \n", v->Tdmdl_vm[k]); 1046 + dml_print("DML: Tdmdl: %fus - time for fabric to become ready and fetch dmd \n", v->Tdmdl[k]); 1047 + dml_print("DML: dst_x_after_scl: %f pixels - number of pixel clocks pipeline and buffer delay after scaler \n", v->DSTXAfterScaler[k]); 1048 + dml_print("DML: dst_y_after_scl: %d lines - number of lines of pipeline and buffer delay after scaler \n", (int)v->DSTYAfterScaler[k]); 955 1049 956 1050 *PrefetchBandwidth = 0; 957 1051 *DestinationLinesToRequestVMInVBlank = 0; ··· 965 1059 double PrefetchBandwidth3 = 0; 966 1060 double PrefetchBandwidth4 = 0; 967 1061 968 - if (Tpre_rounded - *Tno_bw > 0) 1062 + if (Tpre_rounded - v->Tno_bw[k] > 0) 969 1063 PrefetchBandwidth1 = (PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor + 2 * MetaRowByte 970 1064 + 2 * PixelPTEBytesPerRow * HostVMInefficiencyFactor 971 1065 + PrefetchSourceLinesY * swath_width_luma_ub * BytePerPixelY 972 - + PrefetchSourceLinesC * swath_width_chroma_ub * BytePerPixelC) 973 - / (Tpre_rounded - *Tno_bw); 1066 + + PrefetchSourceLinesC * swath_width_chroma_ub * v->BytePerPixelC[k]) 1067 + / (Tpre_rounded - v->Tno_bw[k]); 974 1068 else 975 1069 PrefetchBandwidth1 = 0; 976 1070 977 - if (VStartup == MaxVStartup && (PrefetchBandwidth1 > 4 * prefetch_bw_oto) && (Tpre_rounded - Tsw_oto / 4 - 0.75 * LineTime - *Tno_bw) > 0) { 978 - PrefetchBandwidth1 = (PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor + 2 * MetaRowByte + 2 * PixelPTEBytesPerRow * HostVMInefficiencyFactor) / (Tpre_rounded - Tsw_oto / 4 - 0.75 * LineTime - *Tno_bw); 1071 + if (VStartup == MaxVStartup && (PrefetchBandwidth1 > 4 * prefetch_bw_oto) && (Tpre_rounded - Tsw_oto / 4 - 0.75 * LineTime 
- v->Tno_bw[k]) > 0) { 1072 + PrefetchBandwidth1 = (PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor + 2 * MetaRowByte + 2 * PixelPTEBytesPerRow * HostVMInefficiencyFactor) / (Tpre_rounded - Tsw_oto / 4 - 0.75 * LineTime - v->Tno_bw[k]); 979 1073 } 980 1074 981 - if (Tpre_rounded - *Tno_bw - 2 * Tr0_trips_rounded > 0) 1075 + if (Tpre_rounded - v->Tno_bw[k] - 2 * Tr0_trips_rounded > 0) 982 1076 PrefetchBandwidth2 = (PDEAndMetaPTEBytesFrame * 983 1077 HostVMInefficiencyFactor + PrefetchSourceLinesY * 984 1078 swath_width_luma_ub * BytePerPixelY + 985 1079 PrefetchSourceLinesC * swath_width_chroma_ub * 986 - BytePerPixelC) / 987 - (Tpre_rounded - *Tno_bw - 2 * Tr0_trips_rounded); 1080 + v->BytePerPixelC[k]) / 1081 + (Tpre_rounded - v->Tno_bw[k] - 2 * Tr0_trips_rounded); 988 1082 else 989 1083 PrefetchBandwidth2 = 0; 990 1084 ··· 992 1086 PrefetchBandwidth3 = (2 * MetaRowByte + 2 * PixelPTEBytesPerRow * 993 1087 HostVMInefficiencyFactor + PrefetchSourceLinesY * 994 1088 swath_width_luma_ub * BytePerPixelY + PrefetchSourceLinesC * 995 - swath_width_chroma_ub * BytePerPixelC) / (Tpre_rounded - 1089 + swath_width_chroma_ub * v->BytePerPixelC[k]) / (Tpre_rounded - 996 1090 Tvm_trips_rounded); 997 1091 else 998 1092 PrefetchBandwidth3 = 0; ··· 1002 1096 } 1003 1097 1004 1098 if (Tpre_rounded - Tvm_trips_rounded - 2 * Tr0_trips_rounded > 0) 1005 - PrefetchBandwidth4 = (PrefetchSourceLinesY * swath_width_luma_ub * BytePerPixelY + PrefetchSourceLinesC * swath_width_chroma_ub * BytePerPixelC) 1099 + PrefetchBandwidth4 = (PrefetchSourceLinesY * swath_width_luma_ub * BytePerPixelY + PrefetchSourceLinesC * swath_width_chroma_ub * v->BytePerPixelC[k]) 1006 1100 / (Tpre_rounded - Tvm_trips_rounded - 2 * Tr0_trips_rounded); 1007 1101 else 1008 1102 PrefetchBandwidth4 = 0; ··· 1013 1107 bool Case3OK; 1014 1108 1015 1109 if (PrefetchBandwidth1 > 0) { 1016 - if (*Tno_bw + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / PrefetchBandwidth1 1110 + if (v->Tno_bw[k] + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / PrefetchBandwidth1 1017 1111 >= Tvm_trips_rounded && (MetaRowByte + PixelPTEBytesPerRow * HostVMInefficiencyFactor) / PrefetchBandwidth1 >= Tr0_trips_rounded) { 1018 1112 Case1OK = true; 1019 1113 } else { ··· 1024 1118 } 1025 1119 1026 1120 if (PrefetchBandwidth2 > 0) { 1027 - if (*Tno_bw + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / PrefetchBandwidth2 1121 + if (v->Tno_bw[k] + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / PrefetchBandwidth2 1028 1122 >= Tvm_trips_rounded && (MetaRowByte + PixelPTEBytesPerRow * HostVMInefficiencyFactor) / PrefetchBandwidth2 < Tr0_trips_rounded) { 1029 1123 Case2OK = true; 1030 1124 } else { ··· 1035 1129 } 1036 1130 1037 1131 if (PrefetchBandwidth3 > 0) { 1038 - if (*Tno_bw + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / PrefetchBandwidth3 1132 + if (v->Tno_bw[k] + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / PrefetchBandwidth3 1039 1133 < Tvm_trips_rounded && (MetaRowByte + PixelPTEBytesPerRow * HostVMInefficiencyFactor) / PrefetchBandwidth3 >= Tr0_trips_rounded) { 1040 1134 Case3OK = true; 1041 1135 } else { ··· 1058 1152 dml_print("DML: prefetch_bw_equ: %f\n", prefetch_bw_equ); 1059 1153 1060 1154 if (prefetch_bw_equ > 0) { 1061 - if (GPUVMEnable) { 1062 - Tvm_equ = dml_max3(*Tno_bw + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / prefetch_bw_equ, Tvm_trips, LineTime / 4); 1155 + if (v->GPUVMEnable) { 1156 + Tvm_equ = dml_max3(v->Tno_bw[k] + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / prefetch_bw_equ, 
Tvm_trips, LineTime / 4); 1063 1157 } else { 1064 1158 Tvm_equ = LineTime / 4; 1065 1159 } 1066 1160 1067 - if ((GPUVMEnable || myPipe->DCCEnable)) { 1161 + if ((v->GPUVMEnable || myPipe->DCCEnable)) { 1068 1162 Tr0_equ = dml_max4( 1069 1163 (MetaRowByte + PixelPTEBytesPerRow * HostVMInefficiencyFactor) / prefetch_bw_equ, 1070 1164 Tr0_trips, ··· 1133 1227 } 1134 1228 1135 1229 *RequiredPrefetchPixDataBWLuma = (double) PrefetchSourceLinesY / LinesToRequestPrefetchPixelData * BytePerPixelY * swath_width_luma_ub / LineTime; 1136 - *RequiredPrefetchPixDataBWChroma = (double) PrefetchSourceLinesC / LinesToRequestPrefetchPixelData * BytePerPixelC * swath_width_chroma_ub / LineTime; 1230 + *RequiredPrefetchPixDataBWChroma = (double) PrefetchSourceLinesC / LinesToRequestPrefetchPixelData * v->BytePerPixelC[k] * swath_width_chroma_ub / LineTime; 1137 1231 } else { 1138 1232 MyError = true; 1139 1233 dml_print("DML: MyErr set %s:%d\n", __FILE__, __LINE__); ··· 1149 1243 dml_print("DML: Tr0: %fus - time to fetch first row of data pagetables and first row of meta data (done in parallel)\n", TimeForFetchingRowInVBlank); 1150 1244 dml_print("DML: Tr1: %fus - time to fetch second row of data pagetables and second row of meta data (done in parallel)\n", TimeForFetchingRowInVBlank); 1151 1245 dml_print("DML: Tsw: %fus = time to fetch enough pixel data and cursor data to feed the scalers init position and detile\n", (double)LinesToRequestPrefetchPixelData * LineTime); 1152 - dml_print("DML: To: %fus - time for propagation from scaler to optc\n", (*DSTYAfterScaler + ((*DSTXAfterScaler) / (double) myPipe->HTotal)) * LineTime); 1246 + dml_print("DML: To: %fus - time for propagation from scaler to optc\n", (v->DSTYAfterScaler[k] + ((v->DSTXAfterScaler[k]) / (double) myPipe->HTotal)) * LineTime); 1153 1247 dml_print("DML: Tvstartup - Tsetup - Tcalc - Twait - Tpre - To > 0\n"); 1154 - dml_print("DML: Tslack(pre): %fus - time left over in schedule\n", VStartup * LineTime - TimeForFetchingMetaPTE - 2 * TimeForFetchingRowInVBlank - (*DSTYAfterScaler + ((*DSTXAfterScaler) / (double) myPipe->HTotal)) * LineTime - TWait - TCalc - Tsetup); 1248 + dml_print("DML: Tslack(pre): %fus - time left over in schedule\n", VStartup * LineTime - TimeForFetchingMetaPTE - 2 * TimeForFetchingRowInVBlank - (v->DSTYAfterScaler[k] + ((v->DSTXAfterScaler[k]) / (double) myPipe->HTotal)) * LineTime - TWait - TCalc - Tsetup); 1155 1249 dml_print("DML: row_bytes = dpte_row_bytes (per_pipe) = PixelPTEBytesPerRow = : %d\n", PixelPTEBytesPerRow); 1156 1250 1157 1251 } else { ··· 1182 1276 dml_print("DML: MyErr set %s:%d\n", __FILE__, __LINE__); 1183 1277 } 1184 1278 1185 - *prefetch_vmrow_bw = dml_max(prefetch_vm_bw, prefetch_row_bw); 1279 + v->prefetch_vmrow_bw[k] = dml_max(prefetch_vm_bw, prefetch_row_bw); 1186 1280 } 1187 1281 1188 1282 if (MyError) { ··· 2343 2437 2344 2438 v->ErrorResult[k] = CalculatePrefetchSchedule( 2345 2439 mode_lib, 2346 - v->PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyPixelMixedWithVMData, 2347 - v->PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyVMDataOnly, 2440 + k, 2348 2441 &myPipe, 2349 2442 v->DSCDelay[k], 2350 - v->DPPCLKDelaySubtotal 2351 - + v->DPPCLKDelayCNVCFormater, 2352 - v->DPPCLKDelaySCL, 2353 - v->DPPCLKDelaySCLLBOnly, 2354 - v->DPPCLKDelayCNVCCursor, 2355 - v->DISPCLKDelaySubtotal, 2356 2443 (unsigned int) (v->SwathWidthY[k] / v->HRatio[k]), 2357 - v->OutputFormat[k], 2358 - v->MaxInterDCNTileRepeaters, 2359 2444 dml_min(v->VStartupLines, v->MaxVStartupLines[k]), 2360 2445 
v->MaxVStartupLines[k], 2361 - v->GPUVMMaxPageTableLevels, 2362 - v->GPUVMEnable, 2363 - v->HostVMEnable, 2364 - v->HostVMMaxNonCachedPageTableLevels, 2365 - v->HostVMMinPageSize, 2366 - v->DynamicMetadataEnable[k], 2367 - v->DynamicMetadataVMEnabled, 2368 - v->DynamicMetadataLinesBeforeActiveRequired[k], 2369 - v->DynamicMetadataTransmittedBytes[k], 2370 2446 v->UrgentLatency, 2371 2447 v->UrgentExtraLatency, 2372 2448 v->TCalc, ··· 2362 2474 v->MaxNumSwathY[k], 2363 2475 v->PrefetchSourceLinesC[k], 2364 2476 v->SwathWidthC[k], 2365 - v->BytePerPixelC[k], 2366 2477 v->VInitPreFillC[k], 2367 2478 v->MaxNumSwathC[k], 2368 2479 v->swath_width_luma_ub[k], ··· 2369 2482 v->SwathHeightY[k], 2370 2483 v->SwathHeightC[k], 2371 2484 TWait, 2372 - v->ProgressiveToInterlaceUnitInOPP, 2373 - &v->DSTXAfterScaler[k], 2374 - &v->DSTYAfterScaler[k], 2375 2485 &v->DestinationLinesForPrefetch[k], 2376 2486 &v->PrefetchBandwidth[k], 2377 2487 &v->DestinationLinesToRequestVMInVBlank[k], ··· 2377 2493 &v->VRatioPrefetchC[k], 2378 2494 &v->RequiredPrefetchPixDataBWLuma[k], 2379 2495 &v->RequiredPrefetchPixDataBWChroma[k], 2380 - &v->NotEnoughTimeForDynamicMetadata[k], 2381 - &v->Tno_bw[k], 2382 - &v->prefetch_vmrow_bw[k], 2383 - &v->Tdmdl_vm[k], 2384 - &v->Tdmdl[k], 2385 - &v->VUpdateOffsetPix[k], 2386 - &v->VUpdateWidthPix[k], 2387 - &v->VReadyOffsetPix[k]); 2496 + &v->NotEnoughTimeForDynamicMetadata[k]); 2388 2497 if (v->BlendingAndTiming[k] == k) { 2389 2498 double TotalRepeaterDelayTime = v->MaxInterDCNTileRepeaters * (2 / v->DPPCLK[k] + 3 / v->DISPCLK); 2390 2499 v->VUpdateWidthPix[k] = (14 / v->DCFCLKDeepSleep + 12 / v->DPPCLK[k] + TotalRepeaterDelayTime) * v->PixelClock[k]; ··· 2607 2730 CalculateWatermarksAndDRAMSpeedChangeSupport( 2608 2731 mode_lib, 2609 2732 PrefetchMode, 2610 - v->NumberOfActivePlanes, 2611 - v->MaxLineBufferLines, 2612 - v->LineBufferSize, 2613 - v->DPPOutputBufferPixels, 2614 - v->DETBufferSizeInKByte[0], 2615 - v->WritebackInterfaceBufferSize, 2616 2733 v->DCFCLK, 2617 2734 v->ReturnBW, 2618 - v->GPUVMEnable, 2619 - v->dpte_group_bytes, 2620 - v->MetaChunkSize, 2621 2735 v->UrgentLatency, 2622 2736 v->UrgentExtraLatency, 2623 - v->WritebackLatency, 2624 - v->WritebackChunkSize, 2625 2737 v->SOCCLK, 2626 - v->FinalDRAMClockChangeLatency, 2627 - v->SRExitTime, 2628 - v->SREnterPlusExitTime, 2629 2738 v->DCFCLKDeepSleep, 2630 2739 v->DPPPerPlane, 2631 - v->DCCEnable, 2632 2740 v->DPPCLK, 2633 2741 v->DETBufferSizeY, 2634 2742 v->DETBufferSizeC, 2635 2743 v->SwathHeightY, 2636 2744 v->SwathHeightC, 2637 - v->LBBitPerPixel, 2638 2745 v->SwathWidthY, 2639 2746 v->SwathWidthC, 2640 - v->HRatio, 2641 - v->HRatioChroma, 2642 - v->vtaps, 2643 - v->VTAPsChroma, 2644 - v->VRatio, 2645 - v->VRatioChroma, 2646 - v->HTotal, 2647 - v->PixelClock, 2648 - v->BlendingAndTiming, 2649 2747 v->BytePerPixelDETY, 2650 2748 v->BytePerPixelDETC, 2651 - v->DSTXAfterScaler, 2652 - v->DSTYAfterScaler, 2653 - v->WritebackEnable, 2654 - v->WritebackPixelFormat, 2655 - v->WritebackDestinationWidth, 2656 - v->WritebackDestinationHeight, 2657 - v->WritebackSourceHeight, 2658 - &DRAMClockChangeSupport, 2659 - &v->UrgentWatermark, 2660 - &v->WritebackUrgentWatermark, 2661 - &v->DRAMClockChangeWatermark, 2662 - &v->WritebackDRAMClockChangeWatermark, 2663 - &v->StutterExitWatermark, 2664 - &v->StutterEnterPlusExitWatermark, 2665 - &v->MinActiveDRAMClockChangeLatencySupported); 2749 + &DRAMClockChangeSupport); 2666 2750 2667 2751 for (k = 0; k < v->NumberOfActivePlanes; ++k) { 2668 2752 if (v->WritebackEnable[k] == 
true) { ··· 4608 4770 4609 4771 v->NoTimeForPrefetch[i][j][k] = CalculatePrefetchSchedule( 4610 4772 mode_lib, 4611 - v->PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyPixelMixedWithVMData, 4612 - v->PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyVMDataOnly, 4773 + k, 4613 4774 &myPipe, 4614 4775 v->DSCDelayPerState[i][k], 4615 - v->DPPCLKDelaySubtotal + v->DPPCLKDelayCNVCFormater, 4616 - v->DPPCLKDelaySCL, 4617 - v->DPPCLKDelaySCLLBOnly, 4618 - v->DPPCLKDelayCNVCCursor, 4619 - v->DISPCLKDelaySubtotal, 4620 4776 v->SwathWidthYThisState[k] / v->HRatio[k], 4621 - v->OutputFormat[k], 4622 - v->MaxInterDCNTileRepeaters, 4623 4777 dml_min(v->MaxVStartup, v->MaximumVStartup[i][j][k]), 4624 4778 v->MaximumVStartup[i][j][k], 4625 - v->GPUVMMaxPageTableLevels, 4626 - v->GPUVMEnable, 4627 - v->HostVMEnable, 4628 - v->HostVMMaxNonCachedPageTableLevels, 4629 - v->HostVMMinPageSize, 4630 - v->DynamicMetadataEnable[k], 4631 - v->DynamicMetadataVMEnabled, 4632 - v->DynamicMetadataLinesBeforeActiveRequired[k], 4633 - v->DynamicMetadataTransmittedBytes[k], 4634 4779 v->UrgLatency[i], 4635 4780 v->ExtraLatency, 4636 4781 v->TimeCalc, ··· 4627 4806 v->MaxNumSwY[k], 4628 4807 v->PrefetchLinesC[i][j][k], 4629 4808 v->SwathWidthCThisState[k], 4630 - v->BytePerPixelC[k], 4631 4809 v->PrefillC[k], 4632 4810 v->MaxNumSwC[k], 4633 4811 v->swath_width_luma_ub_this_state[k], ··· 4634 4814 v->SwathHeightYThisState[k], 4635 4815 v->SwathHeightCThisState[k], 4636 4816 v->TWait, 4637 - v->ProgressiveToInterlaceUnitInOPP, 4638 - &v->DSTXAfterScaler[k], 4639 - &v->DSTYAfterScaler[k], 4640 4817 &v->LineTimesForPrefetch[k], 4641 4818 &v->PrefetchBW[k], 4642 4819 &v->LinesForMetaPTE[k], ··· 4642 4825 &v->VRatioPreC[i][j][k], 4643 4826 &v->RequiredPrefetchPixelDataBWLuma[i][j][k], 4644 4827 &v->RequiredPrefetchPixelDataBWChroma[i][j][k], 4645 - &v->NoTimeForDynamicMetadata[i][j][k], 4646 - &v->Tno_bw[k], 4647 - &v->prefetch_vmrow_bw[k], 4648 - &v->Tdmdl_vm[k], 4649 - &v->Tdmdl[k], 4650 - &v->VUpdateOffsetPix[k], 4651 - &v->VUpdateWidthPix[k], 4652 - &v->VReadyOffsetPix[k]); 4828 + &v->NoTimeForDynamicMetadata[i][j][k]); 4653 4829 } 4654 4830 4655 4831 for (k = 0; k <= v->NumberOfActivePlanes - 1; k++) { ··· 4817 5007 CalculateWatermarksAndDRAMSpeedChangeSupport( 4818 5008 mode_lib, 4819 5009 v->PrefetchModePerState[i][j], 4820 - v->NumberOfActivePlanes, 4821 - v->MaxLineBufferLines, 4822 - v->LineBufferSize, 4823 - v->DPPOutputBufferPixels, 4824 - v->DETBufferSizeInKByte[0], 4825 - v->WritebackInterfaceBufferSize, 4826 5010 v->DCFCLKState[i][j], 4827 5011 v->ReturnBWPerState[i][j], 4828 - v->GPUVMEnable, 4829 - v->dpte_group_bytes, 4830 - v->MetaChunkSize, 4831 5012 v->UrgLatency[i], 4832 5013 v->ExtraLatency, 4833 - v->WritebackLatency, 4834 - v->WritebackChunkSize, 4835 5014 v->SOCCLKPerState[i], 4836 - v->FinalDRAMClockChangeLatency, 4837 - v->SRExitTime, 4838 - v->SREnterPlusExitTime, 4839 5015 v->ProjectedDCFCLKDeepSleep[i][j], 4840 5016 v->NoOfDPPThisState, 4841 - v->DCCEnable, 4842 5017 v->RequiredDPPCLKThisState, 4843 5018 v->DETBufferSizeYThisState, 4844 5019 v->DETBufferSizeCThisState, 4845 5020 v->SwathHeightYThisState, 4846 5021 v->SwathHeightCThisState, 4847 - v->LBBitPerPixel, 4848 5022 v->SwathWidthYThisState, 4849 5023 v->SwathWidthCThisState, 4850 - v->HRatio, 4851 - v->HRatioChroma, 4852 - v->vtaps, 4853 - v->VTAPsChroma, 4854 - v->VRatio, 4855 - v->VRatioChroma, 4856 - v->HTotal, 4857 - v->PixelClock, 4858 - v->BlendingAndTiming, 4859 5024 v->BytePerPixelInDETY, 4860 5025 
v->BytePerPixelInDETC, 4861 - v->DSTXAfterScaler, 4862 - v->DSTYAfterScaler, 4863 - v->WritebackEnable, 4864 - v->WritebackPixelFormat, 4865 - v->WritebackDestinationWidth, 4866 - v->WritebackDestinationHeight, 4867 - v->WritebackSourceHeight, 4868 - &v->DRAMClockChangeSupport[i][j], 4869 - &v->UrgentWatermark, 4870 - &v->WritebackUrgentWatermark, 4871 - &v->DRAMClockChangeWatermark, 4872 - &v->WritebackDRAMClockChangeWatermark, 4873 - &v->StutterExitWatermark, 4874 - &v->StutterEnterPlusExitWatermark, 4875 - &v->MinActiveDRAMClockChangeLatencySupported); 5026 + &v->DRAMClockChangeSupport[i][j]); 4876 5027 } 4877 5028 } 4878 5029 ··· 4950 5179 static void CalculateWatermarksAndDRAMSpeedChangeSupport( 4951 5180 struct display_mode_lib *mode_lib, 4952 5181 unsigned int PrefetchMode, 4953 - unsigned int NumberOfActivePlanes, 4954 - unsigned int MaxLineBufferLines, 4955 - unsigned int LineBufferSize, 4956 - unsigned int DPPOutputBufferPixels, 4957 - unsigned int DETBufferSizeInKByte, 4958 - unsigned int WritebackInterfaceBufferSize, 4959 5182 double DCFCLK, 4960 5183 double ReturnBW, 4961 - bool GPUVMEnable, 4962 - unsigned int dpte_group_bytes[], 4963 - unsigned int MetaChunkSize, 4964 5184 double UrgentLatency, 4965 5185 double ExtraLatency, 4966 - double WritebackLatency, 4967 - double WritebackChunkSize, 4968 5186 double SOCCLK, 4969 - double DRAMClockChangeLatency, 4970 - double SRExitTime, 4971 - double SREnterPlusExitTime, 4972 5187 double DCFCLKDeepSleep, 4973 5188 unsigned int DPPPerPlane[], 4974 - bool DCCEnable[], 4975 5189 double DPPCLK[], 4976 5190 unsigned int DETBufferSizeY[], 4977 5191 unsigned int DETBufferSizeC[], 4978 5192 unsigned int SwathHeightY[], 4979 5193 unsigned int SwathHeightC[], 4980 - unsigned int LBBitPerPixel[], 4981 5194 double SwathWidthY[], 4982 5195 double SwathWidthC[], 4983 - double HRatio[], 4984 - double HRatioChroma[], 4985 - unsigned int vtaps[], 4986 - unsigned int VTAPsChroma[], 4987 - double VRatio[], 4988 - double VRatioChroma[], 4989 - unsigned int HTotal[], 4990 - double PixelClock[], 4991 - unsigned int BlendingAndTiming[], 4992 5196 double BytePerPixelDETY[], 4993 5197 double BytePerPixelDETC[], 4994 - double DSTXAfterScaler[], 4995 - double DSTYAfterScaler[], 4996 - bool WritebackEnable[], 4997 - enum source_format_class WritebackPixelFormat[], 4998 - double WritebackDestinationWidth[], 4999 - double WritebackDestinationHeight[], 5000 - double WritebackSourceHeight[], 5001 - enum clock_change_support *DRAMClockChangeSupport, 5002 - double *UrgentWatermark, 5003 - double *WritebackUrgentWatermark, 5004 - double *DRAMClockChangeWatermark, 5005 - double *WritebackDRAMClockChangeWatermark, 5006 - double *StutterExitWatermark, 5007 - double *StutterEnterPlusExitWatermark, 5008 - double *MinActiveDRAMClockChangeLatencySupported) 5198 + enum clock_change_support *DRAMClockChangeSupport) 5009 5199 { 5200 + struct vba_vars_st *v = &mode_lib->vba; 5010 5201 double EffectiveLBLatencyHidingY = 0; 5011 5202 double EffectiveLBLatencyHidingC = 0; 5012 5203 double LinesInDETY[DC__NUM_DPP__MAX] = { 0 }; ··· 4987 5254 double WritebackDRAMClockChangeLatencyHiding = 0; 4988 5255 unsigned int k, j; 4989 5256 4990 - mode_lib->vba.TotalActiveDPP = 0; 4991 - mode_lib->vba.TotalDCCActiveDPP = 0; 4992 - for (k = 0; k < NumberOfActivePlanes; ++k) { 4993 - mode_lib->vba.TotalActiveDPP = mode_lib->vba.TotalActiveDPP + DPPPerPlane[k]; 4994 - if (DCCEnable[k] == true) { 4995 - mode_lib->vba.TotalDCCActiveDPP = mode_lib->vba.TotalDCCActiveDPP + DPPPerPlane[k]; 5257 + 
v->TotalActiveDPP = 0; 5258 + v->TotalDCCActiveDPP = 0; 5259 + for (k = 0; k < v->NumberOfActivePlanes; ++k) { 5260 + v->TotalActiveDPP = v->TotalActiveDPP + DPPPerPlane[k]; 5261 + if (v->DCCEnable[k] == true) { 5262 + v->TotalDCCActiveDPP = v->TotalDCCActiveDPP + DPPPerPlane[k]; 4996 5263 } 4997 5264 } 4998 5265 4999 - *UrgentWatermark = UrgentLatency + ExtraLatency; 5266 + v->UrgentWatermark = UrgentLatency + ExtraLatency; 5000 5267 5001 - *DRAMClockChangeWatermark = DRAMClockChangeLatency + *UrgentWatermark; 5268 + v->DRAMClockChangeWatermark = v->FinalDRAMClockChangeLatency + v->UrgentWatermark; 5002 5269 5003 - mode_lib->vba.TotalActiveWriteback = 0; 5004 - for (k = 0; k < NumberOfActivePlanes; ++k) { 5005 - if (WritebackEnable[k] == true) { 5006 - mode_lib->vba.TotalActiveWriteback = mode_lib->vba.TotalActiveWriteback + 1; 5270 + v->TotalActiveWriteback = 0; 5271 + for (k = 0; k < v->NumberOfActivePlanes; ++k) { 5272 + if (v->WritebackEnable[k] == true) { 5273 + v->TotalActiveWriteback = v->TotalActiveWriteback + 1; 5007 5274 } 5008 5275 } 5009 5276 5010 - if (mode_lib->vba.TotalActiveWriteback <= 1) { 5011 - *WritebackUrgentWatermark = WritebackLatency; 5277 + if (v->TotalActiveWriteback <= 1) { 5278 + v->WritebackUrgentWatermark = v->WritebackLatency; 5012 5279 } else { 5013 - *WritebackUrgentWatermark = WritebackLatency + WritebackChunkSize * 1024.0 / 32.0 / SOCCLK; 5280 + v->WritebackUrgentWatermark = v->WritebackLatency + v->WritebackChunkSize * 1024.0 / 32.0 / SOCCLK; 5014 5281 } 5015 5282 5016 - if (mode_lib->vba.TotalActiveWriteback <= 1) { 5017 - *WritebackDRAMClockChangeWatermark = DRAMClockChangeLatency + WritebackLatency; 5283 + if (v->TotalActiveWriteback <= 1) { 5284 + v->WritebackDRAMClockChangeWatermark = v->FinalDRAMClockChangeLatency + v->WritebackLatency; 5018 5285 } else { 5019 - *WritebackDRAMClockChangeWatermark = DRAMClockChangeLatency + WritebackLatency + WritebackChunkSize * 1024.0 / 32.0 / SOCCLK; 5286 + v->WritebackDRAMClockChangeWatermark = v->FinalDRAMClockChangeLatency + v->WritebackLatency + v->WritebackChunkSize * 1024.0 / 32.0 / SOCCLK; 5020 5287 } 5021 5288 5022 - for (k = 0; k < NumberOfActivePlanes; ++k) { 5289 + for (k = 0; k < v->NumberOfActivePlanes; ++k) { 5023 5290 5024 - mode_lib->vba.LBLatencyHidingSourceLinesY = dml_min((double) MaxLineBufferLines, dml_floor(LineBufferSize / LBBitPerPixel[k] / (SwathWidthY[k] / dml_max(HRatio[k], 1.0)), 1)) - (vtaps[k] - 1); 5291 + v->LBLatencyHidingSourceLinesY = dml_min((double) v->MaxLineBufferLines, dml_floor(v->LineBufferSize / v->LBBitPerPixel[k] / (SwathWidthY[k] / dml_max(v->HRatio[k], 1.0)), 1)) - (v->vtaps[k] - 1); 5025 5292 5026 - mode_lib->vba.LBLatencyHidingSourceLinesC = dml_min((double) MaxLineBufferLines, dml_floor(LineBufferSize / LBBitPerPixel[k] / (SwathWidthC[k] / dml_max(HRatioChroma[k], 1.0)), 1)) - (VTAPsChroma[k] - 1); 5293 + v->LBLatencyHidingSourceLinesC = dml_min((double) v->MaxLineBufferLines, dml_floor(v->LineBufferSize / v->LBBitPerPixel[k] / (SwathWidthC[k] / dml_max(v->HRatioChroma[k], 1.0)), 1)) - (v->VTAPsChroma[k] - 1); 5027 5294 5028 - EffectiveLBLatencyHidingY = mode_lib->vba.LBLatencyHidingSourceLinesY / VRatio[k] * (HTotal[k] / PixelClock[k]); 5295 + EffectiveLBLatencyHidingY = v->LBLatencyHidingSourceLinesY / v->VRatio[k] * (v->HTotal[k] / v->PixelClock[k]); 5029 5296 5030 - EffectiveLBLatencyHidingC = mode_lib->vba.LBLatencyHidingSourceLinesC / VRatioChroma[k] * (HTotal[k] / PixelClock[k]); 5297 + EffectiveLBLatencyHidingC = v->LBLatencyHidingSourceLinesC / 
v->VRatioChroma[k] * (v->HTotal[k] / v->PixelClock[k]); 5031 5298 5032 5299 LinesInDETY[k] = (double) DETBufferSizeY[k] / BytePerPixelDETY[k] / SwathWidthY[k]; 5033 5300 LinesInDETYRoundedDownToSwath[k] = dml_floor(LinesInDETY[k], SwathHeightY[k]); 5034 - FullDETBufferingTimeY[k] = LinesInDETYRoundedDownToSwath[k] * (HTotal[k] / PixelClock[k]) / VRatio[k]; 5301 + FullDETBufferingTimeY[k] = LinesInDETYRoundedDownToSwath[k] * (v->HTotal[k] / v->PixelClock[k]) / v->VRatio[k]; 5035 5302 if (BytePerPixelDETC[k] > 0) { 5036 - LinesInDETC = mode_lib->vba.DETBufferSizeC[k] / BytePerPixelDETC[k] / SwathWidthC[k]; 5303 + LinesInDETC = v->DETBufferSizeC[k] / BytePerPixelDETC[k] / SwathWidthC[k]; 5037 5304 LinesInDETCRoundedDownToSwath = dml_floor(LinesInDETC, SwathHeightC[k]); 5038 - FullDETBufferingTimeC = LinesInDETCRoundedDownToSwath * (HTotal[k] / PixelClock[k]) / VRatioChroma[k]; 5305 + FullDETBufferingTimeC = LinesInDETCRoundedDownToSwath * (v->HTotal[k] / v->PixelClock[k]) / v->VRatioChroma[k]; 5039 5306 } else { 5040 5307 LinesInDETC = 0; 5041 5308 FullDETBufferingTimeC = 999999; 5042 5309 } 5043 5310 5044 - ActiveDRAMClockChangeLatencyMarginY = EffectiveLBLatencyHidingY + FullDETBufferingTimeY[k] - *UrgentWatermark - (HTotal[k] / PixelClock[k]) * (DSTXAfterScaler[k] / HTotal[k] + DSTYAfterScaler[k]) - *DRAMClockChangeWatermark; 5311 + ActiveDRAMClockChangeLatencyMarginY = EffectiveLBLatencyHidingY + FullDETBufferingTimeY[k] - v->UrgentWatermark - (v->HTotal[k] / v->PixelClock[k]) * (v->DSTXAfterScaler[k] / v->HTotal[k] + v->DSTYAfterScaler[k]) - v->DRAMClockChangeWatermark; 5045 5312 5046 - if (NumberOfActivePlanes > 1) { 5047 - ActiveDRAMClockChangeLatencyMarginY = ActiveDRAMClockChangeLatencyMarginY - (1 - 1.0 / NumberOfActivePlanes) * SwathHeightY[k] * HTotal[k] / PixelClock[k] / VRatio[k]; 5313 + if (v->NumberOfActivePlanes > 1) { 5314 + ActiveDRAMClockChangeLatencyMarginY = ActiveDRAMClockChangeLatencyMarginY - (1 - 1.0 / v->NumberOfActivePlanes) * SwathHeightY[k] * v->HTotal[k] / v->PixelClock[k] / v->VRatio[k]; 5048 5315 } 5049 5316 5050 5317 if (BytePerPixelDETC[k] > 0) { 5051 - ActiveDRAMClockChangeLatencyMarginC = EffectiveLBLatencyHidingC + FullDETBufferingTimeC - *UrgentWatermark - (HTotal[k] / PixelClock[k]) * (DSTXAfterScaler[k] / HTotal[k] + DSTYAfterScaler[k]) - *DRAMClockChangeWatermark; 5318 + ActiveDRAMClockChangeLatencyMarginC = EffectiveLBLatencyHidingC + FullDETBufferingTimeC - v->UrgentWatermark - (v->HTotal[k] / v->PixelClock[k]) * (v->DSTXAfterScaler[k] / v->HTotal[k] + v->DSTYAfterScaler[k]) - v->DRAMClockChangeWatermark; 5052 5319 5053 - if (NumberOfActivePlanes > 1) { 5054 - ActiveDRAMClockChangeLatencyMarginC = ActiveDRAMClockChangeLatencyMarginC - (1 - 1.0 / NumberOfActivePlanes) * SwathHeightC[k] * HTotal[k] / PixelClock[k] / VRatioChroma[k]; 5320 + if (v->NumberOfActivePlanes > 1) { 5321 + ActiveDRAMClockChangeLatencyMarginC = ActiveDRAMClockChangeLatencyMarginC - (1 - 1.0 / v->NumberOfActivePlanes) * SwathHeightC[k] * v->HTotal[k] / v->PixelClock[k] / v->VRatioChroma[k]; 5055 5322 } 5056 - mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k] = dml_min(ActiveDRAMClockChangeLatencyMarginY, ActiveDRAMClockChangeLatencyMarginC); 5323 + v->ActiveDRAMClockChangeLatencyMargin[k] = dml_min(ActiveDRAMClockChangeLatencyMarginY, ActiveDRAMClockChangeLatencyMarginC); 5057 5324 } else { 5058 - mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k] = ActiveDRAMClockChangeLatencyMarginY; 5325 + v->ActiveDRAMClockChangeLatencyMargin[k] = ActiveDRAMClockChangeLatencyMarginY; 5059 
5326 } 5060 5327 5061 - if (WritebackEnable[k] == true) { 5328 + if (v->WritebackEnable[k] == true) { 5062 5329 5063 - WritebackDRAMClockChangeLatencyHiding = WritebackInterfaceBufferSize * 1024 / (WritebackDestinationWidth[k] * WritebackDestinationHeight[k] / (WritebackSourceHeight[k] * HTotal[k] / PixelClock[k]) * 4); 5064 - if (WritebackPixelFormat[k] == dm_444_64) { 5330 + WritebackDRAMClockChangeLatencyHiding = v->WritebackInterfaceBufferSize * 1024 / (v->WritebackDestinationWidth[k] * v->WritebackDestinationHeight[k] / (v->WritebackSourceHeight[k] * v->HTotal[k] / v->PixelClock[k]) * 4); 5331 + if (v->WritebackPixelFormat[k] == dm_444_64) { 5065 5332 WritebackDRAMClockChangeLatencyHiding = WritebackDRAMClockChangeLatencyHiding / 2; 5066 5333 } 5067 - if (mode_lib->vba.WritebackConfiguration == dm_whole_buffer_for_single_stream_interleave) { 5334 + if (v->WritebackConfiguration == dm_whole_buffer_for_single_stream_interleave) { 5068 5335 WritebackDRAMClockChangeLatencyHiding = WritebackDRAMClockChangeLatencyHiding * 2; 5069 5336 } 5070 - WritebackDRAMClockChangeLatencyMargin = WritebackDRAMClockChangeLatencyHiding - mode_lib->vba.WritebackDRAMClockChangeWatermark; 5071 - mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k] = dml_min(mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k], WritebackDRAMClockChangeLatencyMargin); 5337 + WritebackDRAMClockChangeLatencyMargin = WritebackDRAMClockChangeLatencyHiding - v->WritebackDRAMClockChangeWatermark; 5338 + v->ActiveDRAMClockChangeLatencyMargin[k] = dml_min(v->ActiveDRAMClockChangeLatencyMargin[k], WritebackDRAMClockChangeLatencyMargin); 5072 5339 } 5073 5340 } 5074 5341 5075 - mode_lib->vba.MinActiveDRAMClockChangeMargin = 999999; 5342 + v->MinActiveDRAMClockChangeMargin = 999999; 5076 5343 PlaneWithMinActiveDRAMClockChangeMargin = 0; 5077 - for (k = 0; k < NumberOfActivePlanes; ++k) { 5078 - if (mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k] < mode_lib->vba.MinActiveDRAMClockChangeMargin) { 5079 - mode_lib->vba.MinActiveDRAMClockChangeMargin = mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k]; 5080 - if (BlendingAndTiming[k] == k) { 5344 + for (k = 0; k < v->NumberOfActivePlanes; ++k) { 5345 + if (v->ActiveDRAMClockChangeLatencyMargin[k] < v->MinActiveDRAMClockChangeMargin) { 5346 + v->MinActiveDRAMClockChangeMargin = v->ActiveDRAMClockChangeLatencyMargin[k]; 5347 + if (v->BlendingAndTiming[k] == k) { 5081 5348 PlaneWithMinActiveDRAMClockChangeMargin = k; 5082 5349 } else { 5083 - for (j = 0; j < NumberOfActivePlanes; ++j) { 5084 - if (BlendingAndTiming[k] == j) { 5350 + for (j = 0; j < v->NumberOfActivePlanes; ++j) { 5351 + if (v->BlendingAndTiming[k] == j) { 5085 5352 PlaneWithMinActiveDRAMClockChangeMargin = j; 5086 5353 } 5087 5354 } ··· 5089 5356 } 5090 5357 } 5091 5358 5092 - *MinActiveDRAMClockChangeLatencySupported = mode_lib->vba.MinActiveDRAMClockChangeMargin + DRAMClockChangeLatency; 5359 + v->MinActiveDRAMClockChangeLatencySupported = v->MinActiveDRAMClockChangeMargin + v->FinalDRAMClockChangeLatency; 5093 5360 5094 5361 SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank = 999999; 5095 - for (k = 0; k < NumberOfActivePlanes; ++k) { 5096 - if (!((k == PlaneWithMinActiveDRAMClockChangeMargin) && (BlendingAndTiming[k] == k)) && !(BlendingAndTiming[k] == PlaneWithMinActiveDRAMClockChangeMargin) && mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k] < SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank) { 5097 - SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank = 
mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k]; 5362 + for (k = 0; k < v->NumberOfActivePlanes; ++k) { 5363 + if (!((k == PlaneWithMinActiveDRAMClockChangeMargin) && (v->BlendingAndTiming[k] == k)) && !(v->BlendingAndTiming[k] == PlaneWithMinActiveDRAMClockChangeMargin) && v->ActiveDRAMClockChangeLatencyMargin[k] < SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank) { 5364 + SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank = v->ActiveDRAMClockChangeLatencyMargin[k]; 5098 5365 } 5099 5366 } 5100 5367 5101 - mode_lib->vba.TotalNumberOfActiveOTG = 0; 5102 - for (k = 0; k < NumberOfActivePlanes; ++k) { 5103 - if (BlendingAndTiming[k] == k) { 5104 - mode_lib->vba.TotalNumberOfActiveOTG = mode_lib->vba.TotalNumberOfActiveOTG + 1; 5368 + v->TotalNumberOfActiveOTG = 0; 5369 + for (k = 0; k < v->NumberOfActivePlanes; ++k) { 5370 + if (v->BlendingAndTiming[k] == k) { 5371 + v->TotalNumberOfActiveOTG = v->TotalNumberOfActiveOTG + 1; 5105 5372 } 5106 5373 } 5107 5374 5108 - if (mode_lib->vba.MinActiveDRAMClockChangeMargin > 0) { 5375 + if (v->MinActiveDRAMClockChangeMargin > 0) { 5109 5376 *DRAMClockChangeSupport = dm_dram_clock_change_vactive; 5110 - } else if (((mode_lib->vba.SynchronizedVBlank == true || mode_lib->vba.TotalNumberOfActiveOTG == 1 || SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank > 0) && PrefetchMode == 0)) { 5377 + } else if (((v->SynchronizedVBlank == true || v->TotalNumberOfActiveOTG == 1 || SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank > 0) && PrefetchMode == 0)) { 5111 5378 *DRAMClockChangeSupport = dm_dram_clock_change_vblank; 5112 5379 } else { 5113 5380 *DRAMClockChangeSupport = dm_dram_clock_change_unsupported; 5114 5381 } 5115 5382 5116 5383 FullDETBufferingTimeYStutterCriticalPlane = FullDETBufferingTimeY[0]; 5117 - for (k = 0; k < NumberOfActivePlanes; ++k) { 5384 + for (k = 0; k < v->NumberOfActivePlanes; ++k) { 5118 5385 if (FullDETBufferingTimeY[k] <= FullDETBufferingTimeYStutterCriticalPlane) { 5119 5386 FullDETBufferingTimeYStutterCriticalPlane = FullDETBufferingTimeY[k]; 5120 - TimeToFinishSwathTransferStutterCriticalPlane = (SwathHeightY[k] - (LinesInDETY[k] - LinesInDETYRoundedDownToSwath[k])) * (HTotal[k] / PixelClock[k]) / VRatio[k]; 5387 + TimeToFinishSwathTransferStutterCriticalPlane = (SwathHeightY[k] - (LinesInDETY[k] - LinesInDETYRoundedDownToSwath[k])) * (v->HTotal[k] / v->PixelClock[k]) / v->VRatio[k]; 5121 5388 } 5122 5389 } 5123 5390 5124 - *StutterExitWatermark = SRExitTime + ExtraLatency + 10 / DCFCLKDeepSleep; 5125 - *StutterEnterPlusExitWatermark = dml_max(SREnterPlusExitTime + ExtraLatency + 10 / DCFCLKDeepSleep, TimeToFinishSwathTransferStutterCriticalPlane); 5391 + v->StutterExitWatermark = v->SRExitTime + ExtraLatency + 10 / DCFCLKDeepSleep; 5392 + v->StutterEnterPlusExitWatermark = dml_max(v->SREnterPlusExitTime + ExtraLatency + 10 / DCFCLKDeepSleep, TimeToFinishSwathTransferStutterCriticalPlane); 5126 5393 5127 5394 } 5128 5395
+1 -27
drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
··· 1610 1610 struct dc_bios *bios = link->ctx->dc_bios; 1611 1611 struct bp_crtc_source_select crtc_source_select = {0}; 1612 1612 enum engine_id engine_id = link->link_enc->preferred_engine; 1613 - uint8_t bit_depth; 1614 1613 1615 1614 if (dc_is_rgb_signal(pipe_ctx->stream->signal)) 1616 1615 engine_id = link->link_enc->analog_engine; 1617 1616 1618 - switch (pipe_ctx->stream->timing.display_color_depth) { 1619 - case COLOR_DEPTH_UNDEFINED: 1620 - bit_depth = 0; 1621 - break; 1622 - case COLOR_DEPTH_666: 1623 - bit_depth = 6; 1624 - break; 1625 - default: 1626 - case COLOR_DEPTH_888: 1627 - bit_depth = 8; 1628 - break; 1629 - case COLOR_DEPTH_101010: 1630 - bit_depth = 10; 1631 - break; 1632 - case COLOR_DEPTH_121212: 1633 - bit_depth = 12; 1634 - break; 1635 - case COLOR_DEPTH_141414: 1636 - bit_depth = 14; 1637 - break; 1638 - case COLOR_DEPTH_161616: 1639 - bit_depth = 16; 1640 - break; 1641 - } 1642 - 1643 1617 crtc_source_select.controller_id = CONTROLLER_ID_D0 + pipe_ctx->stream_res.tg->inst; 1644 - crtc_source_select.bit_depth = bit_depth; 1618 + crtc_source_select.color_depth = pipe_ctx->stream->timing.display_color_depth; 1645 1619 crtc_source_select.engine_id = engine_id; 1646 1620 crtc_source_select.sink_signal = pipe_ctx->stream->signal; 1647 1621
+1 -1
drivers/gpu/drm/amd/display/include/bios_parser_types.h
··· 136 136 enum engine_id engine_id; 137 137 enum controller_id controller_id; 138 138 enum signal_type sink_signal; 139 - uint8_t bit_depth; 139 + enum dc_color_depth color_depth; 140 140 }; 141 141 142 142 struct bp_transmitter_control {
+15 -18
drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
··· 2455 2455 } 2456 2456 2457 2457 for (i = 0; i < NUM_LINK_LEVELS; i++) { 2458 - if (pptable->PcieGenSpeed[i] > pcie_gen_cap || 2459 - pptable->PcieLaneCount[i] > pcie_width_cap) { 2460 - dpm_context->dpm_tables.pcie_table.pcie_gen[i] = 2461 - pptable->PcieGenSpeed[i] > pcie_gen_cap ? 2462 - pcie_gen_cap : pptable->PcieGenSpeed[i]; 2463 - dpm_context->dpm_tables.pcie_table.pcie_lane[i] = 2464 - pptable->PcieLaneCount[i] > pcie_width_cap ? 2465 - pcie_width_cap : pptable->PcieLaneCount[i]; 2466 - smu_pcie_arg = i << 16; 2467 - smu_pcie_arg |= pcie_gen_cap << 8; 2468 - smu_pcie_arg |= pcie_width_cap; 2469 - ret = smu_cmn_send_smc_msg_with_param(smu, 2470 - SMU_MSG_OverridePcieParameters, 2471 - smu_pcie_arg, 2472 - NULL); 2473 - if (ret) 2474 - break; 2475 - } 2458 + dpm_context->dpm_tables.pcie_table.pcie_gen[i] = 2459 + pptable->PcieGenSpeed[i] > pcie_gen_cap ? 2460 + pcie_gen_cap : pptable->PcieGenSpeed[i]; 2461 + dpm_context->dpm_tables.pcie_table.pcie_lane[i] = 2462 + pptable->PcieLaneCount[i] > pcie_width_cap ? 2463 + pcie_width_cap : pptable->PcieLaneCount[i]; 2464 + smu_pcie_arg = i << 16; 2465 + smu_pcie_arg |= dpm_context->dpm_tables.pcie_table.pcie_gen[i] << 8; 2466 + smu_pcie_arg |= dpm_context->dpm_tables.pcie_table.pcie_lane[i]; 2467 + ret = smu_cmn_send_smc_msg_with_param(smu, 2468 + SMU_MSG_OverridePcieParameters, 2469 + smu_pcie_arg, 2470 + NULL); 2471 + if (ret) 2472 + return ret; 2476 2473 } 2477 2474 2478 2475 return ret;
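The message argument in this loop packs the link level and the capped capabilities into a single word: the level in bits 23:16, the gen speed in bits 15:8 and the lane count in bits 7:0. A worked example of the encoding, with purely illustrative values rather than real pptable contents:

/* Illustrative only: link level 1, capped gen value 3, 16 lanes,
 * packed the same way the loop above builds smu_pcie_arg. */
uint32_t smu_pcie_arg = (1u << 16) | (3u << 8) | 16u;	/* == 0x00010310 */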
+6 -1
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
··· 2923 2923 break; 2924 2924 } 2925 2925 2926 - if (!ret) 2926 + if (!ret) { 2927 + /* disable mmio access while doing mode 1 reset*/ 2928 + smu->adev->no_hw_access = true; 2929 + /* ensure no_hw_access is globally visible before any MMIO */ 2930 + smp_mb(); 2927 2931 msleep(SMU13_MODE1_RESET_WAIT_TIME_IN_MS); 2932 + } 2928 2933 2929 2934 return ret; 2930 2935 }
+7 -2
drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
··· 2143 2143 2144 2144 ret = smu_cmn_send_debug_smc_msg(smu, DEBUGSMC_MSG_Mode1Reset); 2145 2145 if (!ret) { 2146 - if (amdgpu_emu_mode == 1) 2146 + if (amdgpu_emu_mode == 1) { 2147 2147 msleep(50000); 2148 - else 2148 + } else { 2149 + /* disable mmio access while doing mode 1 reset*/ 2150 + smu->adev->no_hw_access = true; 2151 + /* ensure no_hw_access is globally visible before any MMIO */ 2152 + smp_mb(); 2149 2153 msleep(1000); 2154 + } 2150 2155 } 2151 2156 2152 2157 return ret;
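Both mode 1 reset paths above rely on the consumer side of no_hw_access: amdgpu's register accessors check the flag and skip MMIO while the reset is in flight. A minimal sketch of that check, assuming a simplified read helper (the name and shape are hypothetical; the real accessors carry extra bookkeeping such as register offset scaling):

/* Hypothetical simplified accessor: honor no_hw_access, as set before
 * the msleep() in the hunks above, by skipping MMIO entirely. */
static inline u32 example_rreg32(struct amdgpu_device *adev, u32 reg)
{
	if (adev->no_hw_access)
		return 0;
	return readl(adev->rmmio + reg);
}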
+99 -23
drivers/gpu/drm/drm_atomic_helper.c
··· 1162 1162 new_state->self_refresh_active; 1163 1163 } 1164 1164 1165 - static void 1166 - encoder_bridge_disable(struct drm_device *dev, struct drm_atomic_state *state) 1165 + /** 1166 + * drm_atomic_helper_commit_encoder_bridge_disable - disable bridges and encoder 1167 + * @dev: DRM device 1168 + * @state: the driver state object 1169 + * 1170 + * Loops over all connectors in the current state and if the CRTC needs 1171 + * it, disables the bridge chain all the way, then disables the encoder 1172 + * afterwards. 1173 + */ 1174 + void 1175 + drm_atomic_helper_commit_encoder_bridge_disable(struct drm_device *dev, 1176 + struct drm_atomic_state *state) 1167 1177 { 1168 1178 struct drm_connector *connector; 1169 1179 struct drm_connector_state *old_conn_state, *new_conn_state; ··· 1239 1229 } 1240 1230 } 1241 1231 } 1232 + EXPORT_SYMBOL(drm_atomic_helper_commit_encoder_bridge_disable); 1242 1233 1243 - static void 1244 - crtc_disable(struct drm_device *dev, struct drm_atomic_state *state) 1234 + /** 1235 + * drm_atomic_helper_commit_crtc_disable - disable CRTCs 1236 + * @dev: DRM device 1237 + * @state: the driver state object 1238 + * 1239 + * Loops over all CRTCs in the current state and if the CRTC needs 1240 + * it, disables it. 1241 + */ 1242 + void 1243 + drm_atomic_helper_commit_crtc_disable(struct drm_device *dev, struct drm_atomic_state *state) 1245 1244 { 1246 1245 struct drm_crtc *crtc; 1247 1246 struct drm_crtc_state *old_crtc_state, *new_crtc_state; ··· 1301 1282 drm_crtc_vblank_put(crtc); 1302 1283 } 1303 1284 } 1285 + EXPORT_SYMBOL(drm_atomic_helper_commit_crtc_disable); 1304 1286 1305 - static void 1306 - encoder_bridge_post_disable(struct drm_device *dev, struct drm_atomic_state *state) 1287 + /** 1288 + * drm_atomic_helper_commit_encoder_bridge_post_disable - post-disable encoder bridges 1289 + * @dev: DRM device 1290 + * @state: the driver state object 1291 + * 1292 + * Loops over all connectors in the current state and if the CRTC needs 1293 + * it, post-disables all encoder bridges. 1294 + */ 1295 + void 1296 + drm_atomic_helper_commit_encoder_bridge_post_disable(struct drm_device *dev, struct drm_atomic_state *state) 1307 1297 { 1308 1298 struct drm_connector *connector; 1309 1299 struct drm_connector_state *old_conn_state, *new_conn_state; ··· 1363 1335 drm_bridge_put(bridge); 1364 1336 } 1365 1337 } 1338 + EXPORT_SYMBOL(drm_atomic_helper_commit_encoder_bridge_post_disable); 1366 1339 1367 1340 static void 1368 1341 disable_outputs(struct drm_device *dev, struct drm_atomic_state *state) 1369 1342 { 1370 - encoder_bridge_disable(dev, state); 1343 + drm_atomic_helper_commit_encoder_bridge_disable(dev, state); 1371 1344 1372 - crtc_disable(dev, state); 1345 + drm_atomic_helper_commit_encoder_bridge_post_disable(dev, state); 1373 1346 1374 - encoder_bridge_post_disable(dev, state); 1347 + drm_atomic_helper_commit_crtc_disable(dev, state); 1375 1348 } 1376 1349 1377 1350 /** ··· 1475 1446 } 1476 1447 EXPORT_SYMBOL(drm_atomic_helper_calc_timestamping_constants); 1477 1448 1478 - static void 1479 - crtc_set_mode(struct drm_device *dev, struct drm_atomic_state *state) 1449 + /** 1450 + * drm_atomic_helper_commit_crtc_set_mode - set the new mode 1451 + * @dev: DRM device 1452 + * @state: the driver state object 1453 + * 1454 + * Loops over all connectors in the current state and, if the mode has 1455 + * changed, changes the mode of the CRTC, then calls down the bridge 1456 + * chain and changes the mode in all bridges as well.
1457 + */ 1458 + void 1459 + drm_atomic_helper_commit_crtc_set_mode(struct drm_device *dev, struct drm_atomic_state *state) 1480 1460 { 1481 1461 struct drm_crtc *crtc; 1482 1462 struct drm_crtc_state *new_crtc_state; ··· 1546 1508 drm_bridge_put(bridge); 1547 1509 } 1548 1510 } 1511 + EXPORT_SYMBOL(drm_atomic_helper_commit_crtc_set_mode); 1549 1512 1550 1513 /** 1551 1514 * drm_atomic_helper_commit_modeset_disables - modeset commit to disable outputs ··· 1570 1531 drm_atomic_helper_update_legacy_modeset_state(dev, state); 1571 1532 drm_atomic_helper_calc_timestamping_constants(state); 1572 1533 1573 - crtc_set_mode(dev, state); 1534 + drm_atomic_helper_commit_crtc_set_mode(dev, state); 1574 1535 } 1575 1536 EXPORT_SYMBOL(drm_atomic_helper_commit_modeset_disables); 1576 1537 1577 - static void drm_atomic_helper_commit_writebacks(struct drm_device *dev, 1578 - struct drm_atomic_state *state) 1538 + /** 1539 + * drm_atomic_helper_commit_writebacks - issue writebacks 1540 + * @dev: DRM device 1541 + * @state: atomic state object being committed 1542 + * 1543 + * This loops over the connectors, checks if the new state requires 1544 + * a writeback job to be issued and in that case issues an atomic 1545 + * commit on each connector. 1546 + */ 1547 + void drm_atomic_helper_commit_writebacks(struct drm_device *dev, 1548 + struct drm_atomic_state *state) 1579 1549 { 1580 1550 struct drm_connector *connector; 1581 1551 struct drm_connector_state *new_conn_state; ··· 1603 1555 } 1604 1556 } 1605 1557 } 1558 + EXPORT_SYMBOL(drm_atomic_helper_commit_writebacks); 1606 1559 1607 - static void 1608 - encoder_bridge_pre_enable(struct drm_device *dev, struct drm_atomic_state *state) 1560 + /** 1561 + * drm_atomic_helper_commit_encoder_bridge_pre_enable - pre-enable bridges 1562 + * @dev: DRM device 1563 + * @state: atomic state object being committed 1564 + * 1565 + * This loops over the connectors and if the CRTC needs it, pre-enables 1566 + * the entire bridge chain. 1567 + */ 1568 + void 1569 + drm_atomic_helper_commit_encoder_bridge_pre_enable(struct drm_device *dev, struct drm_atomic_state *state) 1609 1570 { 1610 1571 struct drm_connector *connector; 1611 1572 struct drm_connector_state *new_conn_state; ··· 1645 1588 drm_bridge_put(bridge); 1646 1589 } 1647 1590 } 1591 + EXPORT_SYMBOL(drm_atomic_helper_commit_encoder_bridge_pre_enable); 1648 1592 1649 - static void 1650 - crtc_enable(struct drm_device *dev, struct drm_atomic_state *state) 1593 + /** 1594 + * drm_atomic_helper_commit_crtc_enable - enables the CRTCs 1595 + * @dev: DRM device 1596 + * @state: atomic state object being committed 1597 + * 1598 + * This loops over CRTCs in the new state, and if the CRTC needs 1599 + * it, enables it. 1600 + */ 1601 + void 1602 + drm_atomic_helper_commit_crtc_enable(struct drm_device *dev, struct drm_atomic_state *state) 1651 1603 { 1652 1604 struct drm_crtc *crtc; 1653 1605 struct drm_crtc_state *old_crtc_state; ··· 1685 1619 } 1686 1620 } 1687 1621 } 1622 + EXPORT_SYMBOL(drm_atomic_helper_commit_crtc_enable); 1688 1623 1689 - static void 1690 - encoder_bridge_enable(struct drm_device *dev, struct drm_atomic_state *state) 1624 + /** 1625 + * drm_atomic_helper_commit_encoder_bridge_enable - enables the bridges 1626 + * @dev: DRM device 1627 + * @state: atomic state object being committed 1628 + * 1629 + * This loops over all connectors in the new state, and if the CRTC needs
1631 + */ 1632 + void 1633 + drm_atomic_helper_commit_encoder_bridge_enable(struct drm_device *dev, struct drm_atomic_state *state) 1691 1634 { 1692 1635 struct drm_connector *connector; 1693 1636 struct drm_connector_state *new_conn_state; ··· 1739 1664 drm_bridge_put(bridge); 1740 1665 } 1741 1666 } 1667 + EXPORT_SYMBOL(drm_atomic_helper_commit_encoder_bridge_enable); 1742 1668 1743 1669 /** 1744 1670 * drm_atomic_helper_commit_modeset_enables - modeset commit to enable outputs ··· 1758 1682 void drm_atomic_helper_commit_modeset_enables(struct drm_device *dev, 1759 1683 struct drm_atomic_state *state) 1760 1684 { 1761 - encoder_bridge_pre_enable(dev, state); 1685 + drm_atomic_helper_commit_crtc_enable(dev, state); 1762 1686 1763 - crtc_enable(dev, state); 1687 + drm_atomic_helper_commit_encoder_bridge_pre_enable(dev, state); 1764 1688 1765 - encoder_bridge_enable(dev, state); 1689 + drm_atomic_helper_commit_encoder_bridge_enable(dev, state); 1766 1690 1767 1691 drm_atomic_helper_commit_writebacks(dev, state); 1768 1692 }
+10
drivers/gpu/drm/drm_fb_helper.c
··· 366 366 { 367 367 struct drm_fb_helper *helper = container_of(work, struct drm_fb_helper, damage_work); 368 368 369 + if (helper->info->state != FBINFO_STATE_RUNNING) 370 + return; 371 + 369 372 drm_fb_helper_fb_dirty(helper); 370 373 } 371 374 ··· 734 731 if (suspend) { 735 732 if (fb_helper->info->state != FBINFO_STATE_RUNNING) 736 733 return; 734 + 735 + /* 736 + * Cancel pending damage work. During GPU reset, VBlank 737 + * interrupts are disabled and drm_fb_helper_fb_dirty() 738 + * would wait for VBlank timeout otherwise. 739 + */ 740 + cancel_work_sync(&fb_helper->damage_work); 737 741 738 742 console_lock(); 739 743
+1 -1
drivers/gpu/drm/exynos/exynos_hdmi.c
··· 1692 1692 { 1693 1693 struct hdmi_context *hdata = arg; 1694 1694 1695 - mod_delayed_work(system_wq, &hdata->hotplug_work, 1695 + mod_delayed_work(system_percpu_wq, &hdata->hotplug_work, 1696 1696 msecs_to_jiffies(HOTPLUG_DEBOUNCE_MS)); 1697 1697 1698 1698 return IRQ_HANDLED;
-6
drivers/gpu/drm/mediatek/mtk_dsi.c
··· 1002 1002 return PTR_ERR(dsi->next_bridge); 1003 1003 } 1004 1004 1005 - /* 1006 - * set flag to request the DSI host bridge be pre-enabled before device bridge 1007 - * in the chain, so the DSI host is ready when the device bridge is pre-enabled 1008 - */ 1009 - dsi->next_bridge->pre_enable_prev_first = true; 1010 - 1011 1005 drm_bridge_add(&dsi->bridge); 1012 1006 1013 1007 ret = component_add(host->dev, &mtk_dsi_component_ops);
+3
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/ad102.c
··· 30 30 31 31 .booter.ctor = ga102_gsp_booter_ctor, 32 32 33 + .fwsec_sb.ctor = tu102_gsp_fwsec_sb_ctor, 34 + .fwsec_sb.dtor = tu102_gsp_fwsec_sb_dtor, 35 + 33 36 .dtor = r535_gsp_dtor, 34 37 .oneinit = tu102_gsp_oneinit, 35 38 .init = tu102_gsp_init,
+1 -7
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/fwsec.c
··· 337 337 } 338 338 339 339 int 340 - nvkm_gsp_fwsec_sb_ctor(struct nvkm_gsp *gsp) 340 + nvkm_gsp_fwsec_sb_init(struct nvkm_gsp *gsp) 341 341 { 342 342 return nvkm_gsp_fwsec_init(gsp, &gsp->fws.falcon.sb, "fwsec-sb", 343 343 NVFW_FALCON_APPIF_DMEMMAPPER_CMD_SB); 344 - } 345 - 346 - void 347 - nvkm_gsp_fwsec_sb_dtor(struct nvkm_gsp *gsp) 348 - { 349 - nvkm_falcon_fw_dtor(&gsp->fws.falcon.sb); 350 344 } 351 345 352 346 int
+3
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/ga100.c
··· 47 47 48 48 .booter.ctor = tu102_gsp_booter_ctor, 49 49 50 + .fwsec_sb.ctor = tu102_gsp_fwsec_sb_ctor, 51 + .fwsec_sb.dtor = tu102_gsp_fwsec_sb_dtor, 52 + 50 53 .dtor = r535_gsp_dtor, 51 54 .oneinit = tu102_gsp_oneinit, 52 55 .init = tu102_gsp_init,
+3
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/ga102.c
··· 158 158 159 159 .booter.ctor = ga102_gsp_booter_ctor, 160 160 161 + .fwsec_sb.ctor = tu102_gsp_fwsec_sb_ctor, 162 + .fwsec_sb.dtor = tu102_gsp_fwsec_sb_dtor, 163 + 161 164 .dtor = r535_gsp_dtor, 162 165 .oneinit = tu102_gsp_oneinit, 163 166 .init = tu102_gsp_init,
+21 -2
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/priv.h
··· 7 7 8 8 int nvkm_gsp_fwsec_frts(struct nvkm_gsp *); 9 9 10 - int nvkm_gsp_fwsec_sb_ctor(struct nvkm_gsp *); 11 10 int nvkm_gsp_fwsec_sb(struct nvkm_gsp *); 12 - void nvkm_gsp_fwsec_sb_dtor(struct nvkm_gsp *); 11 + int nvkm_gsp_fwsec_sb_init(struct nvkm_gsp *gsp); 13 12 14 13 struct nvkm_gsp_fwif { 15 14 int version; ··· 51 52 struct nvkm_falcon *, struct nvkm_falcon_fw *); 52 53 } booter; 53 54 55 + struct { 56 + int (*ctor)(struct nvkm_gsp *); 57 + void (*dtor)(struct nvkm_gsp *); 58 + } fwsec_sb; 59 + 54 60 void (*dtor)(struct nvkm_gsp *); 55 61 int (*oneinit)(struct nvkm_gsp *); 56 62 int (*init)(struct nvkm_gsp *); ··· 71 67 extern const struct nvkm_falcon_fw_func tu102_gsp_fwsec; 72 68 int tu102_gsp_booter_ctor(struct nvkm_gsp *, const char *, const struct firmware *, 73 69 struct nvkm_falcon *, struct nvkm_falcon_fw *); 70 + int tu102_gsp_fwsec_sb_ctor(struct nvkm_gsp *); 71 + void tu102_gsp_fwsec_sb_dtor(struct nvkm_gsp *); 74 72 int tu102_gsp_oneinit(struct nvkm_gsp *); 75 73 int tu102_gsp_init(struct nvkm_gsp *); 76 74 int tu102_gsp_fini(struct nvkm_gsp *, bool suspend); ··· 96 90 97 91 int nvkm_gsp_new_(const struct nvkm_gsp_fwif *, struct nvkm_device *, enum nvkm_subdev_type, int, 98 92 struct nvkm_gsp **); 93 + 94 + static inline int nvkm_gsp_fwsec_sb_ctor(struct nvkm_gsp *gsp) 95 + { 96 + if (gsp->func->fwsec_sb.ctor) 97 + return gsp->func->fwsec_sb.ctor(gsp); 98 + return 0; 99 + } 100 + 101 + static inline void nvkm_gsp_fwsec_sb_dtor(struct nvkm_gsp *gsp) 102 + { 103 + if (gsp->func->fwsec_sb.dtor) 104 + gsp->func->fwsec_sb.dtor(gsp); 105 + } 99 106 100 107 extern const struct nvkm_gsp_func gv100_gsp; 101 108 #endif
+15
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/tu102.c
··· 30 30 #include <nvfw/fw.h> 31 31 #include <nvfw/hs.h> 32 32 33 + int 34 + tu102_gsp_fwsec_sb_ctor(struct nvkm_gsp *gsp) 35 + { 36 + return nvkm_gsp_fwsec_sb_init(gsp); 37 + } 38 + 39 + void 40 + tu102_gsp_fwsec_sb_dtor(struct nvkm_gsp *gsp) 41 + { 42 + nvkm_falcon_fw_dtor(&gsp->fws.falcon.sb); 43 + } 44 + 33 45 static int 34 46 tu102_gsp_booter_unload(struct nvkm_gsp *gsp, u32 mbox0, u32 mbox1) 35 47 { ··· 381 369 .sig_section = ".fwsignature_tu10x", 382 370 383 371 .booter.ctor = tu102_gsp_booter_ctor, 372 + 373 + .fwsec_sb.ctor = tu102_gsp_fwsec_sb_ctor, 374 + .fwsec_sb.dtor = tu102_gsp_fwsec_sb_dtor, 384 375 385 376 .dtor = r535_gsp_dtor, 386 377 .oneinit = tu102_gsp_oneinit,
+3
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/tu116.c
··· 30 30 31 31 .booter.ctor = tu102_gsp_booter_ctor, 32 32 33 + .fwsec_sb.ctor = tu102_gsp_fwsec_sb_ctor, 34 + .fwsec_sb.dtor = tu102_gsp_fwsec_sb_dtor, 35 + 33 36 .dtor = r535_gsp_dtor, 34 37 .oneinit = tu102_gsp_oneinit, 35 38 .init = tu102_gsp_init,
+1 -1
drivers/gpu/drm/pl111/pl111_drv.c
··· 295 295 variant->name, priv); 296 296 if (ret != 0) { 297 297 dev_err(dev, "%s failed irq %d\n", __func__, ret); 298 - return ret; 298 + goto dev_put; 299 299 } 300 300 301 301 ret = pl111_modeset_init(drm);
+1 -1
drivers/gpu/drm/radeon/pptable.h
··· 450 450 //sizeof(ATOM_PPLIB_CLOCK_INFO) 451 451 UCHAR ucEntrySize; 452 452 453 - UCHAR clockInfo[] __counted_by(ucNumEntries); 453 + UCHAR clockInfo[] /*__counted_by(ucNumEntries)*/; 454 454 }ClockInfoArray; 455 455 456 456 typedef struct _NonClockInfoArray{
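The annotation is commented out rather than deleted because clockInfo[] is not ucNumEntries bytes long: each entry occupies ucEntrySize bytes, so the array really spans ucNumEntries * ucEntrySize bytes, and __counted_by(ucNumEntries) makes FORTIFY reject legitimate accesses past the first ucNumEntries bytes. A sketch of the access pattern in question (the accessor name is hypothetical):

/* Hypothetical accessor: entry i begins i * ucEntrySize bytes into
 * clockInfo[], so valid offsets run up to ucNumEntries * ucEntrySize,
 * not ucNumEntries as the disabled annotation would claim. */
static const UCHAR *example_clock_info(const ClockInfoArray *arr, unsigned int i)
{
	return &arr->clockInfo[i * arr->ucEntrySize];
}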
+27 -3
drivers/gpu/drm/tidss/tidss_kms.c
··· 26 26 27 27 tidss_runtime_get(tidss); 28 28 29 - drm_atomic_helper_commit_modeset_disables(ddev, old_state); 30 - drm_atomic_helper_commit_planes(ddev, old_state, DRM_PLANE_COMMIT_ACTIVE_ONLY); 31 - drm_atomic_helper_commit_modeset_enables(ddev, old_state); 29 + /* 30 + * TI's OLDI and DSI encoders need to be set up before the crtc is 31 + * enabled. Thus drm_atomic_helper_commit_modeset_enables() and 32 + * drm_atomic_helper_commit_modeset_disables() cannot be used here, as 33 + * they enable the crtc before bridges' pre-enable, and disable the crtc 34 + * after bridges' post-disable. 35 + * 36 + * Open code the functions here and first call the bridges' pre-enables, 37 + * then crtc enable, then bridges' enable (and vice versa for 38 + * disable). 39 + */ 40 + 41 + drm_atomic_helper_commit_encoder_bridge_disable(ddev, old_state); 42 + drm_atomic_helper_commit_crtc_disable(ddev, old_state); 43 + drm_atomic_helper_commit_encoder_bridge_post_disable(ddev, old_state); 44 + 45 + drm_atomic_helper_update_legacy_modeset_state(ddev, old_state); 46 + drm_atomic_helper_calc_timestamping_constants(old_state); 47 + drm_atomic_helper_commit_crtc_set_mode(ddev, old_state); 48 + 49 + drm_atomic_helper_commit_planes(ddev, old_state, 50 + DRM_PLANE_COMMIT_ACTIVE_ONLY); 51 + 52 + drm_atomic_helper_commit_encoder_bridge_pre_enable(ddev, old_state); 53 + drm_atomic_helper_commit_crtc_enable(ddev, old_state); 54 + drm_atomic_helper_commit_encoder_bridge_enable(ddev, old_state); 55 + drm_atomic_helper_commit_writebacks(ddev, old_state); 32 56 33 57 drm_atomic_helper_commit_hw_done(old_state); 34 58 drm_atomic_helper_wait_for_flip_done(ddev, old_state);
+1 -1
drivers/gpu/nova-core/Kconfig
··· 3 3 depends on 64BIT 4 4 depends on PCI 5 5 depends on RUST 6 - depends on RUST_FW_LOADER_ABSTRACTIONS 6 + select RUST_FW_LOADER_ABSTRACTIONS 7 7 select AUXILIARY_BUS 8 8 default n 9 9 help
+8 -6
drivers/gpu/nova-core/gsp/cmdq.rs
··· 588 588 header.length(), 589 589 ); 590 590 591 + let payload_length = header.payload_length(); 592 + 591 593 // Check that the driver read area is large enough for the message. 592 - if slice_1.len() + slice_2.len() < header.length() { 594 + if slice_1.len() + slice_2.len() < payload_length { 593 595 return Err(EIO); 594 596 } 595 597 596 598 // Cut the message slices down to the actual length of the message. 597 - let (slice_1, slice_2) = if slice_1.len() > header.length() { 598 - // PANIC: we checked above that `slice_1` is at least as long as `msg_header.length()`. 599 - (slice_1.split_at(header.length()).0, &slice_2[0..0]) 599 + let (slice_1, slice_2) = if slice_1.len() > payload_length { 600 + // PANIC: we checked above that `slice_1` is at least as long as `payload_length`. 601 + (slice_1.split_at(payload_length).0, &slice_2[0..0]) 600 602 } else { 601 603 ( 602 604 slice_1, 603 605 // PANIC: we checked above that `slice_1.len() + slice_2.len()` is at least as 604 - // large as `msg_header.length()`. 605 - slice_2.split_at(header.length() - slice_1.len()).0, 606 + // large as `payload_length`. 607 + slice_2.split_at(payload_length - slice_1.len()).0, 606 608 ) 607 609 }; 608 610
+38 -40
drivers/gpu/nova-core/gsp/fw.rs
··· 141 141 // are valid. 142 142 unsafe impl FromBytes for GspFwWprMeta {} 143 143 144 - type GspFwWprMetaBootResumeInfo = r570_144::GspFwWprMeta__bindgen_ty_1; 145 - type GspFwWprMetaBootInfo = r570_144::GspFwWprMeta__bindgen_ty_1__bindgen_ty_1; 144 + type GspFwWprMetaBootResumeInfo = bindings::GspFwWprMeta__bindgen_ty_1; 145 + type GspFwWprMetaBootInfo = bindings::GspFwWprMeta__bindgen_ty_1__bindgen_ty_1; 146 146 147 147 impl GspFwWprMeta { 148 148 /// Fill in and return a `GspFwWprMeta` suitable for booting `gsp_firmware` using the ··· 150 150 pub(crate) fn new(gsp_firmware: &GspFirmware, fb_layout: &FbLayout) -> Self { 151 151 Self(bindings::GspFwWprMeta { 152 152 // CAST: we want to store the bits of `GSP_FW_WPR_META_MAGIC` unmodified. 153 - magic: r570_144::GSP_FW_WPR_META_MAGIC as u64, 154 - revision: u64::from(r570_144::GSP_FW_WPR_META_REVISION), 153 + magic: bindings::GSP_FW_WPR_META_MAGIC as u64, 154 + revision: u64::from(bindings::GSP_FW_WPR_META_REVISION), 155 155 sysmemAddrOfRadix3Elf: gsp_firmware.radix3_dma_handle(), 156 156 sizeOfRadix3Elf: u64::from_safe_cast(gsp_firmware.size), 157 157 sysmemAddrOfBootloader: gsp_firmware.bootloader.ucode.dma_handle(), ··· 315 315 #[repr(u32)] 316 316 pub(crate) enum SeqBufOpcode { 317 317 // Core operation opcodes 318 - CoreReset = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESET, 319 - CoreResume = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESUME, 320 - CoreStart = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_START, 321 - CoreWaitForHalt = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_WAIT_FOR_HALT, 318 + CoreReset = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESET, 319 + CoreResume = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESUME, 320 + CoreStart = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_START, 321 + CoreWaitForHalt = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_WAIT_FOR_HALT, 322 322 323 323 // Delay opcode 324 - DelayUs = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_DELAY_US, 324 + DelayUs = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_DELAY_US, 325 325 326 326 // Register operation opcodes 327 - RegModify = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_MODIFY, 328 - RegPoll = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_POLL, 329 - RegStore = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_STORE, 330 - RegWrite = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_WRITE, 327 + RegModify = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_MODIFY, 328 + RegPoll = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_POLL, 329 + RegStore = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_STORE, 330 + RegWrite = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_WRITE, 331 331 } 332 332 333 333 impl fmt::Display for SeqBufOpcode { ··· 351 351 352 352 fn try_from(value: u32) -> Result<SeqBufOpcode> { 353 353 match value { 354 - r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESET => { 354 + bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESET => { 355 355 Ok(SeqBufOpcode::CoreReset) 356 356 } 357 - r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESUME => { 357 + bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESUME => { 358 358 Ok(SeqBufOpcode::CoreResume) 359 359 } 360 - r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_START => { 360 + bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_START => { 361 361 Ok(SeqBufOpcode::CoreStart) 362 362 } 363 - 
r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_WAIT_FOR_HALT => { 363 + bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_WAIT_FOR_HALT => { 364 364 Ok(SeqBufOpcode::CoreWaitForHalt) 365 365 } 366 - r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_DELAY_US => Ok(SeqBufOpcode::DelayUs), 367 - r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_MODIFY => { 366 + bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_DELAY_US => Ok(SeqBufOpcode::DelayUs), 367 + bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_MODIFY => { 368 368 Ok(SeqBufOpcode::RegModify) 369 369 } 370 - r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_POLL => Ok(SeqBufOpcode::RegPoll), 371 - r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_STORE => Ok(SeqBufOpcode::RegStore), 372 - r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_WRITE => Ok(SeqBufOpcode::RegWrite), 370 + bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_POLL => Ok(SeqBufOpcode::RegPoll), 371 + bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_STORE => Ok(SeqBufOpcode::RegStore), 372 + bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_WRITE => Ok(SeqBufOpcode::RegWrite), 373 373 _ => Err(EINVAL), 374 374 } 375 375 } ··· 385 385 /// Wrapper for GSP sequencer register write payload. 386 386 #[repr(transparent)] 387 387 #[derive(Copy, Clone)] 388 - pub(crate) struct RegWritePayload(r570_144::GSP_SEQ_BUF_PAYLOAD_REG_WRITE); 388 + pub(crate) struct RegWritePayload(bindings::GSP_SEQ_BUF_PAYLOAD_REG_WRITE); 389 389 390 390 impl RegWritePayload { 391 391 /// Returns the register address. ··· 408 408 /// Wrapper for GSP sequencer register modify payload. 409 409 #[repr(transparent)] 410 410 #[derive(Copy, Clone)] 411 - pub(crate) struct RegModifyPayload(r570_144::GSP_SEQ_BUF_PAYLOAD_REG_MODIFY); 411 + pub(crate) struct RegModifyPayload(bindings::GSP_SEQ_BUF_PAYLOAD_REG_MODIFY); 412 412 413 413 impl RegModifyPayload { 414 414 /// Returns the register address. ··· 436 436 /// Wrapper for GSP sequencer register poll payload. 437 437 #[repr(transparent)] 438 438 #[derive(Copy, Clone)] 439 - pub(crate) struct RegPollPayload(r570_144::GSP_SEQ_BUF_PAYLOAD_REG_POLL); 439 + pub(crate) struct RegPollPayload(bindings::GSP_SEQ_BUF_PAYLOAD_REG_POLL); 440 440 441 441 impl RegPollPayload { 442 442 /// Returns the register address. ··· 469 469 /// Wrapper for GSP sequencer delay payload. 470 470 #[repr(transparent)] 471 471 #[derive(Copy, Clone)] 472 - pub(crate) struct DelayUsPayload(r570_144::GSP_SEQ_BUF_PAYLOAD_DELAY_US); 472 + pub(crate) struct DelayUsPayload(bindings::GSP_SEQ_BUF_PAYLOAD_DELAY_US); 473 473 474 474 impl DelayUsPayload { 475 475 /// Returns the delay value in microseconds. ··· 487 487 /// Wrapper for GSP sequencer register store payload. 488 488 #[repr(transparent)] 489 489 #[derive(Copy, Clone)] 490 - pub(crate) struct RegStorePayload(r570_144::GSP_SEQ_BUF_PAYLOAD_REG_STORE); 490 + pub(crate) struct RegStorePayload(bindings::GSP_SEQ_BUF_PAYLOAD_REG_STORE); 491 491 492 492 impl RegStorePayload { 493 493 /// Returns the register address. ··· 510 510 511 511 /// Wrapper for GSP sequencer buffer command. 512 512 #[repr(transparent)] 513 - pub(crate) struct SequencerBufferCmd(r570_144::GSP_SEQUENCER_BUFFER_CMD); 513 + pub(crate) struct SequencerBufferCmd(bindings::GSP_SEQUENCER_BUFFER_CMD); 514 514 515 515 impl SequencerBufferCmd { 516 516 /// Returns the opcode as a `SeqBufOpcode` enum, or error if invalid. ··· 612 612 613 613 /// Wrapper for GSP run CPU sequencer RPC. 
614 614 #[repr(transparent)] 615 - pub(crate) struct RunCpuSequencer(r570_144::rpc_run_cpu_sequencer_v17_00); 615 + pub(crate) struct RunCpuSequencer(bindings::rpc_run_cpu_sequencer_v17_00); 616 616 617 617 impl RunCpuSequencer { 618 618 /// Returns the command index. ··· 797 797 } 798 798 } 799 799 800 - // SAFETY: We can't derive the Zeroable trait for this binding because the 801 - // procedural macro doesn't support the syntax used by bindgen to create the 802 - // __IncompleteArrayField types. So instead we implement it here, which is safe 803 - // because these are explicitly padded structures only containing types for 804 - // which any bit pattern, including all zeros, is valid. 805 - unsafe impl Zeroable for bindings::rpc_message_header_v {} 806 - 807 800 /// GSP Message Element. 808 801 /// 809 802 /// This is essentially a message header expected to be followed by the message data. ··· 846 853 self.inner.checkSum = checksum; 847 854 } 848 855 849 - /// Returns the total length of the message. 856 + /// Returns the length of the message's payload. 857 + pub(crate) fn payload_length(&self) -> usize { 858 + // `rpc.length` includes the length of the RPC message header. 859 + num::u32_as_usize(self.inner.rpc.length) 860 + .saturating_sub(size_of::<bindings::rpc_message_header_v>()) 861 + } 862 + 863 + /// Returns the total length of the message, message and RPC headers included. 850 864 pub(crate) fn length(&self) -> usize { 851 - // `rpc.length` includes the length of the GspRpcHeader but not the message header. 852 - size_of::<Self>() - size_of::<bindings::rpc_message_header_v>() 853 - + num::u32_as_usize(self.inner.rpc.length) 865 + size_of::<Self>() + self.payload_length() 854 866 } 855 867 856 868 // Returns the sequence number of the message.
+7 -4
drivers/gpu/nova-core/gsp/fw/r570_144.rs
··· 24 24 unreachable_pub, 25 25 unsafe_op_in_unsafe_fn 26 26 )] 27 - use kernel::{ 28 - ffi, 29 - prelude::Zeroable, // 30 - }; 27 + use kernel::ffi; 28 + use pin_init::MaybeZeroable; 29 + 31 30 include!("r570_144/bindings.rs"); 31 + 32 + // SAFETY: This type has a size of zero, so its inclusion into another type should not affect that 33 + // type's ability to implement `Zeroable`. 34 + unsafe impl<T> kernel::prelude::Zeroable for __IncompleteArrayField<T> {}
+59 -46
drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs
··· 320 320 pub const NV_VGPU_MSG_EVENT_NUM_EVENTS: _bindgen_ty_3 = 4131; 321 321 pub type _bindgen_ty_3 = ffi::c_uint; 322 322 #[repr(C)] 323 - #[derive(Debug, Default, Copy, Clone)] 323 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 324 324 pub struct NV0080_CTRL_GPU_GET_SRIOV_CAPS_PARAMS { 325 325 pub totalVFs: u32_, 326 326 pub firstVfOffset: u32_, 327 327 pub vfFeatureMask: u32_, 328 + pub __bindgen_padding_0: [u8; 4usize], 328 329 pub FirstVFBar0Address: u64_, 329 330 pub FirstVFBar1Address: u64_, 330 331 pub FirstVFBar2Address: u64_, ··· 341 340 pub bClientRmAllocatedCtxBuffer: u8_, 342 341 pub bNonPowerOf2ChannelCountSupported: u8_, 343 342 pub bVfResizableBAR1Supported: u8_, 343 + pub __bindgen_padding_1: [u8; 7usize], 344 344 } 345 345 #[repr(C)] 346 - #[derive(Debug, Default, Copy, Clone)] 346 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 347 347 pub struct NV2080_CTRL_BIOS_GET_SKU_INFO_PARAMS { 348 348 pub BoardID: u32_, 349 349 pub chipSKU: [ffi::c_char; 9usize], 350 350 pub chipSKUMod: [ffi::c_char; 5usize], 351 + pub __bindgen_padding_0: [u8; 2usize], 351 352 pub skuConfigVersion: u32_, 352 353 pub project: [ffi::c_char; 5usize], 353 354 pub projectSKU: [ffi::c_char; 5usize], 354 355 pub CDP: [ffi::c_char; 6usize], 355 356 pub projectSKUMod: [ffi::c_char; 2usize], 357 + pub __bindgen_padding_1: [u8; 2usize], 356 358 pub businessCycle: u32_, 357 359 } 358 360 pub type NV2080_CTRL_CMD_FB_GET_FB_REGION_SURFACE_MEM_TYPE_FLAG = [u8_; 17usize]; 359 361 #[repr(C)] 360 - #[derive(Debug, Default, Copy, Clone)] 362 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 361 363 pub struct NV2080_CTRL_CMD_FB_GET_FB_REGION_FB_REGION_INFO { 362 364 pub base: u64_, 363 365 pub limit: u64_, ··· 372 368 pub blackList: NV2080_CTRL_CMD_FB_GET_FB_REGION_SURFACE_MEM_TYPE_FLAG, 373 369 } 374 370 #[repr(C)] 375 - #[derive(Debug, Default, Copy, Clone)] 371 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 376 372 pub struct NV2080_CTRL_CMD_FB_GET_FB_REGION_INFO_PARAMS { 377 373 pub numFBRegions: u32_, 374 + pub __bindgen_padding_0: [u8; 4usize], 378 375 pub fbRegion: [NV2080_CTRL_CMD_FB_GET_FB_REGION_FB_REGION_INFO; 16usize], 379 376 } 380 377 #[repr(C)] 381 - #[derive(Debug, Copy, Clone)] 378 + #[derive(Debug, Copy, Clone, MaybeZeroable)] 382 379 pub struct NV2080_CTRL_GPU_GET_GID_INFO_PARAMS { 383 380 pub index: u32_, 384 381 pub flags: u32_, ··· 396 391 } 397 392 } 398 393 #[repr(C)] 399 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 394 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 400 395 pub struct DOD_METHOD_DATA { 401 396 pub status: u32_, 402 397 pub acpiIdListLen: u32_, 403 398 pub acpiIdList: [u32_; 16usize], 404 399 } 405 400 #[repr(C)] 406 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 401 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 407 402 pub struct JT_METHOD_DATA { 408 403 pub status: u32_, 409 404 pub jtCaps: u32_, ··· 412 407 pub __bindgen_padding_0: u8, 413 408 } 414 409 #[repr(C)] 415 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 410 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 416 411 pub struct MUX_METHOD_DATA_ELEMENT { 417 412 pub acpiId: u32_, 418 413 pub mode: u32_, 419 414 pub status: u32_, 420 415 } 421 416 #[repr(C)] 422 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 417 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 423 418 pub struct MUX_METHOD_DATA { 424 419 pub tableLen: u32_, 425 420 pub acpiIdMuxModeTable: [MUX_METHOD_DATA_ELEMENT; 16usize], ··· 427 422 pub acpiIdMuxStateTable: 
[MUX_METHOD_DATA_ELEMENT; 16usize], 428 423 } 429 424 #[repr(C)] 430 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 425 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 431 426 pub struct CAPS_METHOD_DATA { 432 427 pub status: u32_, 433 428 pub optimusCaps: u32_, 434 429 } 435 430 #[repr(C)] 436 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 431 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 437 432 pub struct ACPI_METHOD_DATA { 438 433 pub bValid: u8_, 439 434 pub __bindgen_padding_0: [u8; 3usize], ··· 443 438 pub capsMethodData: CAPS_METHOD_DATA, 444 439 } 445 440 #[repr(C)] 446 - #[derive(Debug, Default, Copy, Clone)] 441 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 447 442 pub struct VIRTUAL_DISPLAY_GET_MAX_RESOLUTION_PARAMS { 448 443 pub headIndex: u32_, 449 444 pub maxHResolution: u32_, 450 445 pub maxVResolution: u32_, 451 446 } 452 447 #[repr(C)] 453 - #[derive(Debug, Default, Copy, Clone)] 448 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 454 449 pub struct VIRTUAL_DISPLAY_GET_NUM_HEADS_PARAMS { 455 450 pub numHeads: u32_, 456 451 pub maxNumHeads: u32_, 457 452 } 458 453 #[repr(C)] 459 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 454 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 460 455 pub struct BUSINFO { 461 456 pub deviceID: u16_, 462 457 pub vendorID: u16_, ··· 466 461 pub __bindgen_padding_0: u8, 467 462 } 468 463 #[repr(C)] 469 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 464 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 470 465 pub struct GSP_VF_INFO { 471 466 pub totalVFs: u32_, 472 467 pub firstVFOffset: u32_, ··· 479 474 pub __bindgen_padding_0: [u8; 5usize], 480 475 } 481 476 #[repr(C)] 482 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 477 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 483 478 pub struct GSP_PCIE_CONFIG_REG { 484 479 pub linkCap: u32_, 485 480 } 486 481 #[repr(C)] 487 - #[derive(Debug, Default, Copy, Clone)] 482 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 488 483 pub struct EcidManufacturingInfo { 489 484 pub ecidLow: u32_, 490 485 pub ecidHigh: u32_, 491 486 pub ecidExtended: u32_, 492 487 } 493 488 #[repr(C)] 494 - #[derive(Debug, Default, Copy, Clone)] 489 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 495 490 pub struct FW_WPR_LAYOUT_OFFSET { 496 491 pub nonWprHeapOffset: u64_, 497 492 pub frtsOffset: u64_, 498 493 } 499 494 #[repr(C)] 500 - #[derive(Debug, Copy, Clone)] 495 + #[derive(Debug, Copy, Clone, MaybeZeroable)] 501 496 pub struct GspStaticConfigInfo_t { 502 497 pub grCapsBits: [u8_; 23usize], 498 + pub __bindgen_padding_0: u8, 503 499 pub gidInfo: NV2080_CTRL_GPU_GET_GID_INFO_PARAMS, 504 500 pub SKUInfo: NV2080_CTRL_BIOS_GET_SKU_INFO_PARAMS, 501 + pub __bindgen_padding_1: [u8; 4usize], 505 502 pub fbRegionInfoParams: NV2080_CTRL_CMD_FB_GET_FB_REGION_INFO_PARAMS, 506 503 pub sriovCaps: NV0080_CTRL_GPU_GET_SRIOV_CAPS_PARAMS, 507 504 pub sriovMaxGfid: u32_, 508 505 pub engineCaps: [u32_; 3usize], 509 506 pub poisonFuseEnabled: u8_, 507 + pub __bindgen_padding_2: [u8; 7usize], 510 508 pub fb_length: u64_, 511 509 pub fbio_mask: u64_, 512 510 pub fb_bus_width: u32_, ··· 535 527 pub bIsMigSupported: u8_, 536 528 pub RTD3GC6TotalBoardPower: u16_, 537 529 pub RTD3GC6PerstDelay: u16_, 530 + pub __bindgen_padding_3: [u8; 2usize], 538 531 pub bar1PdeBase: u64_, 539 532 pub bar2PdeBase: u64_, 540 533 pub bVbiosValid: u8_, 534 + pub __bindgen_padding_4: [u8; 3usize], 541 535 pub vbiosSubVendor: u32_, 542 536 pub vbiosSubDevice: u32_, 543 537 
pub bPageRetirementSupported: u8_, 544 538 pub bSplitVasBetweenServerClientRm: u8_, 545 539 pub bClRootportNeedsNosnoopWAR: u8_, 540 + pub __bindgen_padding_5: u8, 546 541 pub displaylessMaxHeads: VIRTUAL_DISPLAY_GET_NUM_HEADS_PARAMS, 547 542 pub displaylessMaxResolution: VIRTUAL_DISPLAY_GET_MAX_RESOLUTION_PARAMS, 543 + pub __bindgen_padding_6: [u8; 4usize], 548 544 pub displaylessMaxPixels: u64_, 549 545 pub hInternalClient: u32_, 550 546 pub hInternalDevice: u32_, ··· 570 558 } 571 559 } 572 560 #[repr(C)] 573 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 561 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 574 562 pub struct GspSystemInfo { 575 563 pub gpuPhysAddr: u64_, 576 564 pub gpuPhysFbAddr: u64_, ··· 627 615 pub hostPageSize: u64_, 628 616 } 629 617 #[repr(C)] 630 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 618 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 631 619 pub struct MESSAGE_QUEUE_INIT_ARGUMENTS { 632 620 pub sharedMemPhysAddr: u64_, 633 621 pub pageTableEntryCount: u32_, ··· 636 624 pub statQueueOffset: u64_, 637 625 } 638 626 #[repr(C)] 639 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 627 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 640 628 pub struct GSP_SR_INIT_ARGUMENTS { 641 629 pub oldLevel: u32_, 642 630 pub flags: u32_, ··· 644 632 pub __bindgen_padding_0: [u8; 3usize], 645 633 } 646 634 #[repr(C)] 647 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 635 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 648 636 pub struct GSP_ARGUMENTS_CACHED { 649 637 pub messageQueueInitArguments: MESSAGE_QUEUE_INIT_ARGUMENTS, 650 638 pub srInitArguments: GSP_SR_INIT_ARGUMENTS, ··· 654 642 pub profilerArgs: GSP_ARGUMENTS_CACHED__bindgen_ty_1, 655 643 } 656 644 #[repr(C)] 657 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 645 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 658 646 pub struct GSP_ARGUMENTS_CACHED__bindgen_ty_1 { 659 647 pub pa: u64_, 660 648 pub size: u64_, 661 649 } 662 650 #[repr(C)] 663 - #[derive(Copy, Clone, Zeroable)] 651 + #[derive(Copy, Clone, MaybeZeroable)] 664 652 pub union rpc_message_rpc_union_field_v03_00 { 665 653 pub spare: u32_, 666 654 pub cpuRmGfid: u32_, ··· 676 664 } 677 665 pub type rpc_message_rpc_union_field_v = rpc_message_rpc_union_field_v03_00; 678 666 #[repr(C)] 667 + #[derive(MaybeZeroable)] 679 668 pub struct rpc_message_header_v03_00 { 680 669 pub header_version: u32_, 681 670 pub signature: u32_, ··· 699 686 } 700 687 pub type rpc_message_header_v = rpc_message_header_v03_00; 701 688 #[repr(C)] 702 - #[derive(Copy, Clone, Zeroable)] 689 + #[derive(Copy, Clone, MaybeZeroable)] 703 690 pub struct GspFwWprMeta { 704 691 pub magic: u64_, 705 692 pub revision: u64_, ··· 734 721 pub verified: u64_, 735 722 } 736 723 #[repr(C)] 737 - #[derive(Copy, Clone, Zeroable)] 724 + #[derive(Copy, Clone, MaybeZeroable)] 738 725 pub union GspFwWprMeta__bindgen_ty_1 { 739 726 pub __bindgen_anon_1: GspFwWprMeta__bindgen_ty_1__bindgen_ty_1, 740 727 pub __bindgen_anon_2: GspFwWprMeta__bindgen_ty_1__bindgen_ty_2, 741 728 } 742 729 #[repr(C)] 743 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 730 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 744 731 pub struct GspFwWprMeta__bindgen_ty_1__bindgen_ty_1 { 745 732 pub sysmemAddrOfSignature: u64_, 746 733 pub sizeOfSignature: u64_, 747 734 } 748 735 #[repr(C)] 749 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 736 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 750 737 pub struct GspFwWprMeta__bindgen_ty_1__bindgen_ty_2 
{ 751 738 pub gspFwHeapFreeListWprOffset: u32_, 752 739 pub unused0: u32_, ··· 762 749 } 763 750 } 764 751 #[repr(C)] 765 - #[derive(Copy, Clone, Zeroable)] 752 + #[derive(Copy, Clone, MaybeZeroable)] 766 753 pub union GspFwWprMeta__bindgen_ty_2 { 767 754 pub __bindgen_anon_1: GspFwWprMeta__bindgen_ty_2__bindgen_ty_1, 768 755 pub __bindgen_anon_2: GspFwWprMeta__bindgen_ty_2__bindgen_ty_2, 769 756 } 770 757 #[repr(C)] 771 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 758 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 772 759 pub struct GspFwWprMeta__bindgen_ty_2__bindgen_ty_1 { 773 760 pub partitionRpcAddr: u64_, 774 761 pub partitionRpcRequestOffset: u16_, ··· 780 767 pub lsUcodeVersion: u32_, 781 768 } 782 769 #[repr(C)] 783 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 770 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 784 771 pub struct GspFwWprMeta__bindgen_ty_2__bindgen_ty_2 { 785 772 pub partitionRpcPadding: [u32_; 4usize], 786 773 pub sysmemAddrOfCrashReportQueue: u64_, ··· 815 802 pub const LibosMemoryRegionLoc_LIBOS_MEMORY_REGION_LOC_FB: LibosMemoryRegionLoc = 2; 816 803 pub type LibosMemoryRegionLoc = ffi::c_uint; 817 804 #[repr(C)] 818 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 805 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 819 806 pub struct LibosMemoryRegionInitArgument { 820 807 pub id8: LibosAddress, 821 808 pub pa: LibosAddress, ··· 825 812 pub __bindgen_padding_0: [u8; 6usize], 826 813 } 827 814 #[repr(C)] 828 - #[derive(Debug, Default, Copy, Clone)] 815 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 829 816 pub struct PACKED_REGISTRY_ENTRY { 830 817 pub nameOffset: u32_, 831 818 pub type_: u8_, ··· 834 821 pub length: u32_, 835 822 } 836 823 #[repr(C)] 837 - #[derive(Debug, Default)] 824 + #[derive(Debug, Default, MaybeZeroable)] 838 825 pub struct PACKED_REGISTRY_TABLE { 839 826 pub size: u32_, 840 827 pub numEntries: u32_, 841 828 pub entries: __IncompleteArrayField<PACKED_REGISTRY_ENTRY>, 842 829 } 843 830 #[repr(C)] 844 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 831 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 845 832 pub struct msgqTxHeader { 846 833 pub version: u32_, 847 834 pub size: u32_, ··· 853 840 pub entryOff: u32_, 854 841 } 855 842 #[repr(C)] 856 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 843 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 857 844 pub struct msgqRxHeader { 858 845 pub readPtr: u32_, 859 846 } 860 847 #[repr(C)] 861 848 #[repr(align(8))] 862 - #[derive(Zeroable)] 849 + #[derive(MaybeZeroable)] 863 850 pub struct GSP_MSG_QUEUE_ELEMENT { 864 851 pub authTagBuffer: [u8_; 16usize], 865 852 pub aadBuffer: [u8_; 16usize], ··· 879 866 } 880 867 } 881 868 #[repr(C)] 882 - #[derive(Debug, Default)] 869 + #[derive(Debug, Default, MaybeZeroable)] 883 870 pub struct rpc_run_cpu_sequencer_v17_00 { 884 871 pub bufferSizeDWord: u32_, 885 872 pub cmdIndex: u32_, ··· 897 884 pub const GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESUME: GSP_SEQ_BUF_OPCODE = 8; 898 885 pub type GSP_SEQ_BUF_OPCODE = ffi::c_uint; 899 886 #[repr(C)] 900 - #[derive(Debug, Default, Copy, Clone)] 887 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 901 888 pub struct GSP_SEQ_BUF_PAYLOAD_REG_WRITE { 902 889 pub addr: u32_, 903 890 pub val: u32_, 904 891 } 905 892 #[repr(C)] 906 - #[derive(Debug, Default, Copy, Clone)] 893 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 907 894 pub struct GSP_SEQ_BUF_PAYLOAD_REG_MODIFY { 908 895 pub addr: u32_, 909 896 pub mask: u32_, 910 897 pub 
val: u32_, 911 898 } 912 899 #[repr(C)] 913 - #[derive(Debug, Default, Copy, Clone)] 900 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 914 901 pub struct GSP_SEQ_BUF_PAYLOAD_REG_POLL { 915 902 pub addr: u32_, 916 903 pub mask: u32_, ··· 919 906 pub error: u32_, 920 907 } 921 908 #[repr(C)] 922 - #[derive(Debug, Default, Copy, Clone)] 909 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 923 910 pub struct GSP_SEQ_BUF_PAYLOAD_DELAY_US { 924 911 pub val: u32_, 925 912 } 926 913 #[repr(C)] 927 - #[derive(Debug, Default, Copy, Clone)] 914 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 928 915 pub struct GSP_SEQ_BUF_PAYLOAD_REG_STORE { 929 916 pub addr: u32_, 930 917 pub index: u32_, 931 918 } 932 919 #[repr(C)] 933 - #[derive(Copy, Clone)] 920 + #[derive(Copy, Clone, MaybeZeroable)] 934 921 pub struct GSP_SEQUENCER_BUFFER_CMD { 935 922 pub opCode: GSP_SEQ_BUF_OPCODE, 936 923 pub payload: GSP_SEQUENCER_BUFFER_CMD__bindgen_ty_1, 937 924 } 938 925 #[repr(C)] 939 - #[derive(Copy, Clone)] 926 + #[derive(Copy, Clone, MaybeZeroable)] 940 927 pub union GSP_SEQUENCER_BUFFER_CMD__bindgen_ty_1 { 941 928 pub regWrite: GSP_SEQ_BUF_PAYLOAD_REG_WRITE, 942 929 pub regModify: GSP_SEQ_BUF_PAYLOAD_REG_MODIFY,
+2
drivers/hv/mshv_common.c
··· 142 142 } 143 143 EXPORT_SYMBOL_GPL(hv_call_get_partition_property); 144 144 145 + #ifdef CONFIG_X86 145 146 /* 146 147 * Corresponding sleep states have to be initialized in order for a subsequent 147 148 * HVCALL_ENTER_SLEEP_STATE call to succeed. Currently only S5 state as per ··· 238 237 BUG(); 239 238 240 239 } 240 + #endif
+11 -9
drivers/hv/mshv_regions.c
··· 58 58 59 59 page_order = folio_order(page_folio(page)); 60 60 /* The hypervisor only supports 4K and 2M page sizes */ 61 - if (page_order && page_order != HPAGE_PMD_ORDER) 61 + if (page_order && page_order != PMD_ORDER) 62 62 return -EINVAL; 63 63 64 64 stride = 1 << page_order; ··· 494 494 unsigned long mstart, mend; 495 495 int ret = -EPERM; 496 496 497 - if (mmu_notifier_range_blockable(range)) 498 - mutex_lock(&region->mutex); 499 - else if (!mutex_trylock(&region->mutex)) 500 - goto out_fail; 501 - 502 - mmu_interval_set_seq(mni, cur_seq); 503 - 504 497 mstart = max(range->start, region->start_uaddr); 505 498 mend = min(range->end, region->start_uaddr + 506 499 (region->nr_pages << HV_HYP_PAGE_SHIFT)); ··· 501 508 page_offset = HVPFN_DOWN(mstart - region->start_uaddr); 502 509 page_count = HVPFN_DOWN(mend - mstart); 503 510 511 + if (mmu_notifier_range_blockable(range)) 512 + mutex_lock(&region->mutex); 513 + else if (!mutex_trylock(&region->mutex)) 514 + goto out_fail; 515 + 516 + mmu_interval_set_seq(mni, cur_seq); 517 + 504 518 ret = mshv_region_remap_pages(region, HV_MAP_GPA_NO_ACCESS, 505 519 page_offset, page_count); 506 520 if (ret) 507 - goto out_fail; 521 + goto out_unlock; 508 522 509 523 mshv_region_invalidate_pages(region, page_offset, page_count); 510 524 ··· 519 519 520 520 return true; 521 521 522 + out_unlock: 523 + mutex_unlock(&region->mutex); 522 524 out_fail: 523 525 WARN_ONCE(ret, 524 526 "Failed to invalidate region %#llx-%#llx (range %#lx-%#lx, event: %u, pages %#llx-%#llx, mm: %#llx): %d\n",
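The hunk above both moves the range arithmetic ahead of the locking and adds the missing unlock on the remap-failure path. For context, a minimal sketch of the locking discipline an mmu_interval_notifier invalidate callback must follow; everything except the notifier API itself is hypothetical:

#include <linux/mmu_notifier.h>
#include <linux/mutex.h>

struct my_region {
	struct mmu_interval_notifier mni;
	struct mutex mutex;
};

static bool my_region_invalidate(struct mmu_interval_notifier *mni,
				 const struct mmu_notifier_range *range,
				 unsigned long cur_seq)
{
	struct my_region *region = container_of(mni, struct my_region, mni);

	/* A non-blockable invalidation must not sleep on the mutex. */
	if (mmu_notifier_range_blockable(range))
		mutex_lock(&region->mutex);
	else if (!mutex_trylock(&region->mutex))
		return false;	/* could not invalidate without sleeping */

	mmu_interval_set_seq(mni, cur_seq);
	/* ... tear down the mappings covered by the range ... */
	mutex_unlock(&region->mutex);
	return true;
}

Returning false in the non-blockable case reports the failure to the core so the operation can be retried from a context that is allowed to sleep; every other exit must drop the mutex, which is exactly the leak the new out_unlock label plugs.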
+1 -1
drivers/iommu/generic_pt/.kunitconfig
··· 1 1 CONFIG_KUNIT=y 2 + CONFIG_COMPILE_TEST=y 2 3 CONFIG_GENERIC_PT=y 3 4 CONFIG_DEBUG_GENERIC_PT=y 4 5 CONFIG_IOMMU_PT=y ··· 12 11 CONFIG_DEBUG_KERNEL=y 13 12 CONFIG_FAULT_INJECTION=y 14 13 CONFIG_RUNTIME_TESTING_MENU=y 15 - CONFIG_IOMMUFD_TEST=y
+2 -2
drivers/iommu/generic_pt/pt_defs.h
··· 202 202 203 203 #define PT_SUPPORTED_FEATURE(feature_nr) (PT_SUPPORTED_FEATURES & BIT(feature_nr)) 204 204 205 - static inline bool pt_feature(const struct pt_common *common, 205 + static __always_inline bool pt_feature(const struct pt_common *common, 206 206 unsigned int feature_nr) 207 207 { 208 208 if (PT_FORCE_ENABLED_FEATURES & BIT(feature_nr)) ··· 212 212 return common->features & BIT(feature_nr); 213 213 } 214 214 215 - static inline bool pts_feature(const struct pt_state *pts, 215 + static __always_inline bool pts_feature(const struct pt_state *pts, 216 216 unsigned int feature_nr) 217 217 { 218 218 return pt_feature(pts->range->common, feature_nr);
+2 -1
drivers/iommu/iommufd/Kconfig
··· 41 41 depends on DEBUG_KERNEL 42 42 depends on FAULT_INJECTION 43 43 depends on RUNTIME_TESTING_MENU 44 - depends on IOMMU_PT_AMDV1 44 + depends on IOMMU_PT_AMDV1=y || IOMMUFD=IOMMU_PT_AMDV1 45 + select DMA_SHARED_BUFFER 45 46 select IOMMUFD_DRIVER 46 47 default n 47 48 help
+1 -1
drivers/irqchip/irq-gic-v5-its.c
··· 849 849 850 850 itte = gicv5_its_device_get_itte_ref(its_dev, event_id); 851 851 852 - if (FIELD_GET(GICV5_ITTL2E_VALID, *itte)) 852 + if (FIELD_GET(GICV5_ITTL2E_VALID, le64_to_cpu(*itte))) 853 853 return -EEXIST; 854 854 855 855 itt_entry = FIELD_PREP(GICV5_ITTL2E_LPI_ID, lpi) |
+8 -2
drivers/irqchip/irq-riscv-imsic-state.c
··· 477 477 lpriv = per_cpu_ptr(imsic->lpriv, cpu); 478 478 479 479 bitmap_free(lpriv->dirty_bitmap); 480 + kfree(lpriv->vectors); 480 481 } 481 482 482 483 free_percpu(imsic->lpriv); ··· 491 490 int cpu, i; 492 491 493 492 /* Allocate per-CPU private state */ 494 - imsic->lpriv = __alloc_percpu(struct_size(imsic->lpriv, vectors, global->nr_ids + 1), 495 - __alignof__(*imsic->lpriv)); 493 + imsic->lpriv = alloc_percpu(typeof(*imsic->lpriv)); 496 494 if (!imsic->lpriv) 497 495 return -ENOMEM; 498 496 ··· 510 510 /* Setup lazy timer for synchronization */ 511 511 timer_setup(&lpriv->timer, imsic_local_timer_callback, TIMER_PINNED); 512 512 #endif 513 + 514 + /* Allocate vector array */ 515 + lpriv->vectors = kcalloc(global->nr_ids + 1, sizeof(*lpriv->vectors), 516 + GFP_KERNEL); 517 + if (!lpriv->vectors) 518 + goto fail_local_cleanup; 513 519 514 520 /* Setup vector array */ 515 521 for (i = 0; i <= global->nr_ids; i++) {
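Replacing the flexible vectors[] tail with a pointer (see the header change below) makes the per-CPU state a fixed-size type again, so plain alloc_percpu() suffices and the variable-length vector table moves to a per-CPU kcalloc() that the teardown path now frees. A hedged sketch of that allocation shape, with all names hypothetical:

#include <linux/percpu.h>
#include <linux/slab.h>

struct my_vector { unsigned int id; };

struct my_lpriv {
	unsigned long flags;		/* fixed-size per-CPU state */
	struct my_vector *vectors;	/* was: struct my_vector vectors[]; */
};

static struct my_lpriv __percpu *lpriv;

static int my_setup(unsigned int nr_ids)
{
	unsigned int cpu;

	lpriv = alloc_percpu(struct my_lpriv);	/* zeroed on allocation */
	if (!lpriv)
		return -ENOMEM;

	for_each_possible_cpu(cpu) {
		struct my_lpriv *p = per_cpu_ptr(lpriv, cpu);

		p->vectors = kcalloc(nr_ids + 1, sizeof(*p->vectors),
				     GFP_KERNEL);
		if (!p->vectors)
			goto fail;
	}
	return 0;

fail:
	/* percpu memory starts zeroed, so kfree(NULL) here is safe */
	for_each_possible_cpu(cpu)
		kfree(per_cpu_ptr(lpriv, cpu)->vectors);
	free_percpu(lpriv);
	return -ENOMEM;
}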
+1 -1
drivers/irqchip/irq-riscv-imsic-state.h
··· 40 40 #endif 41 41 42 42 /* Local vector table */ 43 - struct imsic_vector vectors[]; 43 + struct imsic_vector *vectors; 44 44 }; 45 45 46 46 struct imsic_priv {
+11 -17
drivers/media/i2c/ov02c10.c
··· 165 165 {0x3809, 0x88}, 166 166 {0x380a, 0x04}, 167 167 {0x380b, 0x44}, 168 - {0x3810, 0x00}, 169 - {0x3811, 0x02}, 170 - {0x3812, 0x00}, 171 - {0x3813, 0x02}, 172 168 {0x3814, 0x01}, 173 169 {0x3815, 0x01}, 174 170 {0x3816, 0x01}, 175 171 {0x3817, 0x01}, 176 172 177 - {0x3820, 0xa0}, 173 + {0x3820, 0xa8}, 178 174 {0x3821, 0x00}, 179 175 {0x3822, 0x80}, 180 176 {0x3823, 0x08}, ··· 381 385 struct v4l2_ctrl *vblank; 382 386 struct v4l2_ctrl *hblank; 383 387 struct v4l2_ctrl *exposure; 384 - struct v4l2_ctrl *hflip; 385 - struct v4l2_ctrl *vflip; 386 388 387 389 struct clk *img_clk; 388 390 struct gpio_desc *reset; ··· 459 465 break; 460 466 461 467 case V4L2_CID_HFLIP: 468 + cci_write(ov02c10->regmap, OV02C10_ISP_X_WIN_CONTROL, 469 + ctrl->val ? 2 : 1, &ret); 462 470 cci_update_bits(ov02c10->regmap, OV02C10_ROTATE_CONTROL, 463 - BIT(3), ov02c10->hflip->val << 3, &ret); 471 + BIT(3), ctrl->val ? 0 : BIT(3), &ret); 464 472 break; 465 473 466 474 case V4L2_CID_VFLIP: 475 + cci_write(ov02c10->regmap, OV02C10_ISP_Y_WIN_CONTROL, 476 + ctrl->val ? 2 : 1, &ret); 467 477 cci_update_bits(ov02c10->regmap, OV02C10_ROTATE_CONTROL, 468 - BIT(4), ov02c10->vflip->val << 4, &ret); 478 + BIT(4), ctrl->val << 4, &ret); 469 479 break; 470 480 471 481 default: ··· 547 549 OV02C10_EXPOSURE_STEP, 548 550 exposure_max); 549 551 550 - ov02c10->hflip = v4l2_ctrl_new_std(ctrl_hdlr, &ov02c10_ctrl_ops, 551 - V4L2_CID_HFLIP, 0, 1, 1, 0); 552 - if (ov02c10->hflip) 553 - ov02c10->hflip->flags |= V4L2_CTRL_FLAG_MODIFY_LAYOUT; 552 + v4l2_ctrl_new_std(ctrl_hdlr, &ov02c10_ctrl_ops, V4L2_CID_HFLIP, 553 + 0, 1, 1, 0); 554 554 555 - ov02c10->vflip = v4l2_ctrl_new_std(ctrl_hdlr, &ov02c10_ctrl_ops, 556 - V4L2_CID_VFLIP, 0, 1, 1, 0); 557 - if (ov02c10->vflip) 558 - ov02c10->vflip->flags |= V4L2_CTRL_FLAG_MODIFY_LAYOUT; 555 + v4l2_ctrl_new_std(ctrl_hdlr, &ov02c10_ctrl_ops, V4L2_CID_VFLIP, 556 + 0, 1, 1, 0); 559 557 560 558 v4l2_ctrl_new_std_menu_items(ctrl_hdlr, &ov02c10_ctrl_ops, 561 559 V4L2_CID_TEST_PATTERN,
+3 -3
drivers/media/mc/mc-request.c
··· 315 315 316 316 fd_prepare_file(fdf)->private_data = req; 317 317 318 - *alloc_fd = fd_publish(fdf); 319 - 320 318 snprintf(req->debug_str, sizeof(req->debug_str), "%u:%d", 321 - atomic_inc_return(&mdev->request_id), *alloc_fd); 319 + atomic_inc_return(&mdev->request_id), fd_prepare_fd(fdf)); 322 320 dev_dbg(mdev->dev, "request: allocated %s\n", req->debug_str); 321 + 322 + *alloc_fd = fd_publish(fdf); 323 323 324 324 return 0; 325 325
+1 -1
drivers/media/pci/intel/Kconfig
··· 6 6 7 7 config IPU_BRIDGE 8 8 tristate "Intel IPU Bridge" 9 - depends on ACPI || COMPILE_TEST 9 + depends on ACPI 10 10 depends on I2C 11 11 help 12 12 The IPU bridge is a helper library for Intel IPU drivers to
+29
drivers/media/pci/intel/ipu-bridge.c
··· 5 5 #include <acpi/acpi_bus.h> 6 6 #include <linux/cleanup.h> 7 7 #include <linux/device.h> 8 + #include <linux/dmi.h> 8 9 #include <linux/i2c.h> 9 10 #include <linux/mei_cl_bus.h> 10 11 #include <linux/platform_device.h> ··· 97 96 IPU_SENSOR_CONFIG("SONY471A", 1, 200000000), 98 97 /* Toshiba T4KA3 */ 99 98 IPU_SENSOR_CONFIG("XMCC0003", 1, 321468000), 99 + }; 100 + 101 + /* 102 + * DMI matches for laptops which have their sensor mounted upside-down 103 + * without reporting a rotation of 180° in either the SSDB or the _PLD. 104 + */ 105 + static const struct dmi_system_id upside_down_sensor_dmi_ids[] = { 106 + { 107 + .matches = { 108 + DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Dell Inc."), 109 + DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "XPS 13 9350"), 110 + }, 111 + .driver_data = "OVTI02C1", 112 + }, 113 + { 114 + .matches = { 115 + DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Dell Inc."), 116 + DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "XPS 16 9640"), 117 + }, 118 + .driver_data = "OVTI02C1", 119 + }, 120 + {} /* Terminating entry */ 100 121 }; 101 122 102 123 static const struct ipu_property_names prop_names = { ··· 271 248 static u32 ipu_bridge_parse_rotation(struct acpi_device *adev, 272 249 struct ipu_sensor_ssdb *ssdb) 273 250 { 251 + const struct dmi_system_id *dmi_id; 252 + 253 + dmi_id = dmi_first_match(upside_down_sensor_dmi_ids); 254 + if (dmi_id && acpi_dev_hid_match(adev, dmi_id->driver_data)) 255 + return 180; 256 + 274 257 switch (ssdb->degree) { 275 258 case IPU_SENSOR_ROTATION_NORMAL: 276 259 return 0;
-7
drivers/media/platform/arm/mali-c55/mali-c55-params.c
··· 582 582 struct mali_c55 *mali_c55 = params->mali_c55; 583 583 int ret; 584 584 585 - if (config->version != MALI_C55_PARAM_BUFFER_V1) { 586 - dev_dbg(mali_c55->dev, 587 - "Unsupported extensible format version: %u\n", 588 - config->version); 589 - return -EINVAL; 590 - } 591 - 592 585 ret = v4l2_isp_params_validate_buffer_size(mali_c55->dev, vb, 593 586 v4l2_isp_params_buffer_size(MALI_C55_PARAMS_MAX_SIZE)); 594 587 if (ret)
+26 -15
drivers/media/platform/renesas/rzg2l-cru/rzg2l-csi2.c
··· 96 96 97 97 #define VSRSTS_RETRIES 20 98 98 99 - #define RZG2L_CSI2_MIN_WIDTH 320 100 - #define RZG2L_CSI2_MIN_HEIGHT 240 101 - #define RZG2L_CSI2_MAX_WIDTH 2800 102 - #define RZG2L_CSI2_MAX_HEIGHT 4095 103 - 104 - #define RZG2L_CSI2_DEFAULT_WIDTH RZG2L_CSI2_MIN_WIDTH 105 - #define RZG2L_CSI2_DEFAULT_HEIGHT RZG2L_CSI2_MIN_HEIGHT 106 99 #define RZG2L_CSI2_DEFAULT_FMT MEDIA_BUS_FMT_UYVY8_1X16 107 100 108 101 enum rzg2l_csi2_pads { ··· 130 137 int (*dphy_enable)(struct rzg2l_csi2 *csi2); 131 138 int (*dphy_disable)(struct rzg2l_csi2 *csi2); 132 139 bool has_system_clk; 140 + unsigned int min_width; 141 + unsigned int min_height; 142 + unsigned int max_width; 143 + unsigned int max_height; 133 144 }; 134 145 135 146 struct rzg2l_csi2_timings { ··· 415 418 .dphy_enable = rzg2l_csi2_dphy_enable, 416 419 .dphy_disable = rzg2l_csi2_dphy_disable, 417 420 .has_system_clk = true, 421 + .min_width = 320, 422 + .min_height = 240, 423 + .max_width = 2800, 424 + .max_height = 4095, 418 425 }; 419 426 420 427 static int rzg2l_csi2_dphy_setting(struct v4l2_subdev *sd, bool on) ··· 543 542 .dphy_enable = rzv2h_csi2_dphy_enable, 544 543 .dphy_disable = rzv2h_csi2_dphy_disable, 545 544 .has_system_clk = false, 545 + .min_width = 320, 546 + .min_height = 240, 547 + .max_width = 4096, 548 + .max_height = 4096, 546 549 }; 547 550 548 551 static int rzg2l_csi2_mipi_link_setting(struct v4l2_subdev *sd, bool on) ··· 636 631 struct v4l2_subdev_state *state, 637 632 struct v4l2_subdev_format *fmt) 638 633 { 634 + struct rzg2l_csi2 *csi2 = sd_to_csi2(sd); 639 635 struct v4l2_mbus_framefmt *src_format; 640 636 struct v4l2_mbus_framefmt *sink_format; 641 637 ··· 659 653 sink_format->ycbcr_enc = fmt->format.ycbcr_enc; 660 654 sink_format->quantization = fmt->format.quantization; 661 655 sink_format->width = clamp_t(u32, fmt->format.width, 662 - RZG2L_CSI2_MIN_WIDTH, RZG2L_CSI2_MAX_WIDTH); 656 + csi2->info->min_width, 657 + csi2->info->max_width); 663 658 sink_format->height = clamp_t(u32, fmt->format.height, 664 - RZG2L_CSI2_MIN_HEIGHT, RZG2L_CSI2_MAX_HEIGHT); 659 + csi2->info->min_height, 660 + csi2->info->max_height); 665 661 fmt->format = *sink_format; 666 662 667 663 /* propagate format to source pad */ ··· 676 668 struct v4l2_subdev_state *sd_state) 677 669 { 678 670 struct v4l2_subdev_format fmt = { .pad = RZG2L_CSI2_SINK, }; 671 + struct rzg2l_csi2 *csi2 = sd_to_csi2(sd); 679 672 680 - fmt.format.width = RZG2L_CSI2_DEFAULT_WIDTH; 681 - fmt.format.height = RZG2L_CSI2_DEFAULT_HEIGHT; 673 + fmt.format.width = csi2->info->min_width; 674 + fmt.format.height = csi2->info->min_height; 682 675 fmt.format.field = V4L2_FIELD_NONE; 683 676 fmt.format.code = RZG2L_CSI2_DEFAULT_FMT; 684 677 fmt.format.colorspace = V4L2_COLORSPACE_SRGB; ··· 706 697 struct v4l2_subdev_state *sd_state, 707 698 struct v4l2_subdev_frame_size_enum *fse) 708 699 { 700 + struct rzg2l_csi2 *csi2 = sd_to_csi2(sd); 701 + 709 702 if (fse->index != 0) 710 703 return -EINVAL; 711 704 712 705 if (!rzg2l_csi2_code_to_fmt(fse->code)) 713 706 return -EINVAL; 714 707 715 - fse->min_width = RZG2L_CSI2_MIN_WIDTH; 716 - fse->min_height = RZG2L_CSI2_MIN_HEIGHT; 717 - fse->max_width = RZG2L_CSI2_MAX_WIDTH; 718 - fse->max_height = RZG2L_CSI2_MAX_HEIGHT; 708 + fse->min_width = csi2->info->min_width; 709 + fse->min_height = csi2->info->min_height; 710 + fse->max_width = csi2->info->max_width; 711 + fse->max_height = csi2->info->max_height; 719 712 720 713 return 0; 721 714 }
+2
drivers/misc/mei/hw-me-regs.h
··· 122 122 123 123 #define MEI_DEV_ID_WCL_P 0x4D70 /* Wildcat Lake P */ 124 124 125 + #define MEI_DEV_ID_NVL_S 0x6E68 /* Nova Lake Point S */ 126 + 125 127 /* 126 128 * MEI HW Section 127 129 */
+2
drivers/misc/mei/pci-me.c
··· 129 129 130 130 {MEI_PCI_DEVICE(MEI_DEV_ID_WCL_P, MEI_ME_PCH15_CFG)}, 131 131 132 + {MEI_PCI_DEVICE(MEI_DEV_ID_NVL_S, MEI_ME_PCH15_CFG)}, 133 + 132 134 /* required last entry */ 133 135 {0, } 134 136 };
+1 -5
drivers/misc/rp1/Kconfig
··· 5 5 6 6 config MISC_RP1 7 7 tristate "RaspberryPi RP1 misc device" 8 - depends on OF_IRQ && OF_OVERLAY && PCI_MSI && PCI_QUIRKS 9 - select PCI_DYNAMIC_OF_NODES 8 + depends on OF_IRQ && PCI_MSI 10 9 help 11 10 Support the RP1 peripheral chip found on Raspberry Pi 5 board. 12 11 ··· 14 15 15 16 The driver is responsible for enabling the DT node once the PCIe 16 17 endpoint has been configured, and handling interrupts. 17 - 18 - This driver uses an overlay to load other drivers to support for 19 - RP1 internal sub-devices.
+1 -2
drivers/misc/rp1/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 - obj-$(CONFIG_MISC_RP1) += rp1-pci.o 3 - rp1-pci-objs := rp1_pci.o rp1-pci.dtbo.o 2 + obj-$(CONFIG_MISC_RP1) += rp1_pci.o
-25
drivers/misc/rp1/rp1-pci.dtso
··· 1 - // SPDX-License-Identifier: (GPL-2.0 OR MIT) 2 - 3 - /* 4 - * The dts overlay is included from the dts directory so 5 - * it can be possible to check it with CHECK_DTBS while 6 - * also compile it from the driver source directory. 7 - */ 8 - 9 - /dts-v1/; 10 - /plugin/; 11 - 12 - / { 13 - fragment@0 { 14 - target-path=""; 15 - __overlay__ { 16 - compatible = "pci1de4,1"; 17 - #address-cells = <3>; 18 - #size-cells = <2>; 19 - interrupt-controller; 20 - #interrupt-cells = <2>; 21 - 22 - #include "arm64/broadcom/rp1-common.dtsi" 23 - }; 24 - }; 25 - };
+4 -33
drivers/misc/rp1/rp1_pci.c
··· 34 34 /* Interrupts */ 35 35 #define RP1_INT_END 61 36 36 37 - /* Embedded dtbo symbols created by cmd_wrap_S_dtb in scripts/Makefile.lib */ 38 - extern char __dtbo_rp1_pci_begin[]; 39 - extern char __dtbo_rp1_pci_end[]; 40 - 41 37 struct rp1_dev { 42 38 struct pci_dev *pdev; 43 39 struct irq_domain *domain; 44 40 struct irq_data *pcie_irqds[64]; 45 41 void __iomem *bar1; 46 - int ovcs_id; /* overlay changeset id */ 47 42 bool level_triggered_irq[RP1_INT_END]; 48 43 }; 49 44 ··· 179 184 180 185 static int rp1_probe(struct pci_dev *pdev, const struct pci_device_id *id) 181 186 { 182 - u32 dtbo_size = __dtbo_rp1_pci_end - __dtbo_rp1_pci_begin; 183 - void *dtbo_start = __dtbo_rp1_pci_begin; 184 187 struct device *dev = &pdev->dev; 185 188 struct device_node *rp1_node; 186 - bool skip_ovl = true; 187 189 struct rp1_dev *rp1; 188 190 int err = 0; 189 191 int i; 190 192 191 - /* 192 - * Either use rp1_nexus node if already present in DT, or 193 - * set a flag to load it from overlay at runtime 194 - */ 195 - rp1_node = of_find_node_by_name(NULL, "rp1_nexus"); 196 - if (!rp1_node) { 197 - rp1_node = dev_of_node(dev); 198 - skip_ovl = false; 199 - } 193 + rp1_node = dev_of_node(dev); 200 194 201 195 if (!rp1_node) { 202 196 dev_err(dev, "Missing of_node for device\n"); ··· 260 276 rp1_chained_handle_irq, rp1); 261 277 } 262 278 263 - if (!skip_ovl) { 264 - err = of_overlay_fdt_apply(dtbo_start, dtbo_size, &rp1->ovcs_id, 265 - rp1_node); 266 - if (err) 267 - goto err_unregister_interrupts; 268 - } 269 - 270 279 err = of_platform_default_populate(rp1_node, NULL, dev); 271 280 if (err) { 272 281 dev_err_probe(&pdev->dev, err, "Error populating devicetree\n"); 273 - goto err_unload_overlay; 282 + goto err_unregister_interrupts; 274 283 } 275 284 276 - if (skip_ovl) 277 - of_node_put(rp1_node); 285 + of_node_put(rp1_node); 278 286 279 287 return 0; 280 288 281 - err_unload_overlay: 282 - of_overlay_remove(&rp1->ovcs_id); 283 289 err_unregister_interrupts: 284 290 rp1_unregister_interrupts(pdev); 285 291 err_put_node: 286 - if (skip_ovl) 287 - of_node_put(rp1_node); 292 + of_node_put(rp1_node); 288 293 289 294 return err; 290 295 } 291 296 292 297 static void rp1_remove(struct pci_dev *pdev) 293 298 { 294 - struct rp1_dev *rp1 = pci_get_drvdata(pdev); 295 299 struct device *dev = &pdev->dev; 296 300 297 301 of_platform_depopulate(dev); 298 - of_overlay_remove(&rp1->ovcs_id); 299 302 rp1_unregister_interrupts(pdev); 300 303 } 301 304
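With the embedded overlay gone, probe reduces to populating platform devices from the DT node that firmware already bound to the PCI endpoint. A rough sketch of that shape — the driver is hypothetical, and the pcim_enable_device() step is an assumption rather than something taken from the hunk above:

#include <linux/of.h>
#include <linux/of_platform.h>
#include <linux/pci.h>

static int my_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	struct device_node *np = dev_of_node(&pdev->dev);
	int err;

	if (!np)
		return -ENODEV;	/* DT must describe this endpoint */

	err = pcim_enable_device(pdev);
	if (err)
		return err;

	/* create platform devices for each child node */
	return of_platform_default_populate(np, NULL, &pdev->dev);
}

static void my_remove(struct pci_dev *pdev)
{
	of_platform_depopulate(&pdev->dev);
}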
+1 -1
drivers/mtd/nand/ecc-sw-hamming.c
··· 8 8 * 9 9 * Completely replaces the previous ECC implementation which was written by: 10 10 * Steven J. Hill (sjhill@realitydiluted.com) 11 - * Thomas Gleixner (tglx@linutronix.de) 11 + * Thomas Gleixner (tglx@kernel.org) 12 12 * 13 13 * Information on how this algorithm works and how it was developed 14 14 * can be found in Documentation/driver-api/mtd/nand_ecc.rst
+1 -1
drivers/mtd/nand/raw/diskonchip.c
··· 11 11 * Error correction code lifted from the old docecc code 12 12 * Author: Fabrice Bellard (fabrice.bellard@netgem.com) 13 13 * Copyright (C) 2000 Netgem S.A. 14 - * converted to the generic Reed-Solomon library by Thomas Gleixner <tglx@linutronix.de> 14 + * converted to the generic Reed-Solomon library by Thomas Gleixner <tglx@kernel.org> 15 15 * 16 16 * Interface to generic NAND code for M-Systems DiskOnChip devices 17 17 */
+2 -2
drivers/mtd/nand/raw/nand_base.c
··· 8 8 * http://www.linux-mtd.infradead.org/doc/nand.html 9 9 * 10 10 * Copyright (C) 2000 Steven J. Hill (sjhill@realitydiluted.com) 11 - * 2002-2006 Thomas Gleixner (tglx@linutronix.de) 11 + * 2002-2006 Thomas Gleixner (tglx@kernel.org) 12 12 * 13 13 * Credits: 14 14 * David Woodhouse for adding multichip support ··· 6594 6594 6595 6595 MODULE_LICENSE("GPL"); 6596 6596 MODULE_AUTHOR("Steven J. Hill <sjhill@realitydiluted.com>"); 6597 - MODULE_AUTHOR("Thomas Gleixner <tglx@linutronix.de>"); 6597 + MODULE_AUTHOR("Thomas Gleixner <tglx@kernel.org>"); 6598 6598 MODULE_DESCRIPTION("Generic NAND flash driver code");
+1 -1
drivers/mtd/nand/raw/nand_bbt.c
··· 3 3 * Overview: 4 4 * Bad block table support for the NAND driver 5 5 * 6 - * Copyright © 2004 Thomas Gleixner (tglx@linutronix.de) 6 + * Copyright © 2004 Thomas Gleixner (tglx@kernel.org) 7 7 * 8 8 * Description: 9 9 *
+1 -1
drivers/mtd/nand/raw/nand_ids.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 2 /* 3 - * Copyright (C) 2002 Thomas Gleixner (tglx@linutronix.de) 3 + * Copyright (C) 2002 Thomas Gleixner (tglx@kernel.org) 4 4 */ 5 5 6 6 #include <linux/sizes.h>
+1 -1
drivers/mtd/nand/raw/nand_jedec.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* 3 3 * Copyright (C) 2000 Steven J. Hill (sjhill@realitydiluted.com) 4 - * 2002-2006 Thomas Gleixner (tglx@linutronix.de) 4 + * 2002-2006 Thomas Gleixner (tglx@kernel.org) 5 5 * 6 6 * Credits: 7 7 * David Woodhouse for adding multichip support
+1 -1
drivers/mtd/nand/raw/nand_legacy.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* 3 3 * Copyright (C) 2000 Steven J. Hill (sjhill@realitydiluted.com) 4 - * 2002-2006 Thomas Gleixner (tglx@linutronix.de) 4 + * 2002-2006 Thomas Gleixner (tglx@kernel.org) 5 5 * 6 6 * Credits: 7 7 * David Woodhouse for adding multichip support
+1 -1
drivers/mtd/nand/raw/nand_onfi.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* 3 3 * Copyright (C) 2000 Steven J. Hill (sjhill@realitydiluted.com) 4 - * 2002-2006 Thomas Gleixner (tglx@linutronix.de) 4 + * 2002-2006 Thomas Gleixner (tglx@kernel.org) 5 5 * 6 6 * Credits: 7 7 * David Woodhouse for adding multichip support
+1 -1
drivers/mtd/nand/raw/ndfc.c
··· 272 272 module_platform_driver(ndfc_driver); 273 273 274 274 MODULE_LICENSE("GPL"); 275 - MODULE_AUTHOR("Thomas Gleixner <tglx@linutronix.de>"); 275 + MODULE_AUTHOR("Thomas Gleixner <tglx@kernel.org>"); 276 276 MODULE_DESCRIPTION("OF Platform driver for NDFC");
+5 -2
drivers/net/can/Kconfig
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 3 3 menuconfig CAN_DEV 4 - bool "CAN Device Drivers" 4 + tristate "CAN Device Drivers" 5 5 default y 6 6 depends on CAN 7 7 help ··· 17 17 virtual ones. If you own such devices or plan to use the virtual CAN 18 18 interfaces to develop applications, say Y here. 19 19 20 - if CAN_DEV && CAN 20 + To compile as a module, choose M here: the module will be called 21 + can-dev. 22 + 23 + if CAN_DEV 21 24 22 25 config CAN_VCAN 23 26 tristate "Virtual Local CAN Interface (vcan)"
+1 -1
drivers/net/can/Makefile
··· 7 7 obj-$(CONFIG_CAN_VXCAN) += vxcan.o 8 8 obj-$(CONFIG_CAN_SLCAN) += slcan/ 9 9 10 - obj-$(CONFIG_CAN_DEV) += dev/ 10 + obj-y += dev/ 11 11 obj-y += esd/ 12 12 obj-y += rcar/ 13 13 obj-y += rockchip/
+1 -1
drivers/net/can/ctucanfd/ctucanfd_base.c
··· 310 310 } 311 311 312 312 ssp_cfg = FIELD_PREP(REG_TRV_DELAY_SSP_OFFSET, ssp_offset); 313 - ssp_cfg |= FIELD_PREP(REG_TRV_DELAY_SSP_SRC, 0x1); 313 + ssp_cfg |= FIELD_PREP(REG_TRV_DELAY_SSP_SRC, 0x0); 314 314 } 315 315 316 316 ctucan_write32(priv, CTUCANFD_TRV_DELAY, ssp_cfg);
+3 -2
drivers/net/can/dev/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 3 - obj-$(CONFIG_CAN) += can-dev.o 3 + obj-$(CONFIG_CAN_DEV) += can-dev.o 4 4 5 - can-dev-$(CONFIG_CAN_DEV) += skb.o 5 + can-dev-y += skb.o 6 + 6 7 can-dev-$(CONFIG_CAN_CALC_BITTIMING) += calc_bittiming.o 7 8 can-dev-$(CONFIG_CAN_NETLINK) += bittiming.o 8 9 can-dev-$(CONFIG_CAN_NETLINK) += dev.o
+27
drivers/net/can/dev/dev.c
··· 375 375 } 376 376 } 377 377 378 + void can_set_cap_info(struct net_device *dev) 379 + { 380 + struct can_priv *priv = netdev_priv(dev); 381 + u32 can_cap; 382 + 383 + if (can_dev_in_xl_only_mode(priv)) { 384 + /* XL only mode => no CC/FD capability */ 385 + can_cap = CAN_CAP_XL; 386 + } else { 387 + /* mixed mode => CC + FD/XL capability */ 388 + can_cap = CAN_CAP_CC; 389 + 390 + if (priv->ctrlmode & CAN_CTRLMODE_FD) 391 + can_cap |= CAN_CAP_FD; 392 + 393 + if (priv->ctrlmode & CAN_CTRLMODE_XL) 394 + can_cap |= CAN_CAP_XL; 395 + } 396 + 397 + if (priv->ctrlmode & (CAN_CTRLMODE_LISTENONLY | 398 + CAN_CTRLMODE_RESTRICTED)) 399 + can_cap |= CAN_CAP_RO; 400 + 401 + can_set_cap(dev, can_cap); 402 + } 403 + 378 404 /* helper to define static CAN controller features at device creation time */ 379 405 int can_set_static_ctrlmode(struct net_device *dev, u32 static_mode) 380 406 { ··· 416 390 417 391 /* override MTU which was set by default in can_setup()? */ 418 392 can_set_default_mtu(dev); 393 + can_set_cap_info(dev); 419 394 420 395 return 0; 421 396 }
+1
drivers/net/can/dev/netlink.c
··· 377 377 } 378 378 379 379 can_set_default_mtu(dev); 380 + can_set_cap_info(dev); 380 381 381 382 return 0; 382 383 }
+1 -1
drivers/net/can/usb/etas_es58x/es58x_core.c
··· 1736 1736 dev_dbg(dev, "%s: Allocated %d rx URBs each of size %u\n", 1737 1737 __func__, i, rx_buf_len); 1738 1738 1739 - return ret; 1739 + return 0; 1740 1740 } 1741 1741 1742 1742 /**
+2
drivers/net/can/usb/gs_usb.c
··· 751 751 hf, parent->hf_size_rx, 752 752 gs_usb_receive_bulk_callback, parent); 753 753 754 + usb_anchor_urb(urb, &parent->rx_submitted); 755 + 754 756 rc = usb_submit_urb(urb, GFP_ATOMIC); 755 757 756 758 /* USB failure take down all interfaces */
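The point of anchoring the URB before usb_submit_urb() is that the URB stays reachable through the anchor (e.g. for usb_kill_anchored_urbs()) even if completion fires before the submit call returns; anchoring afterwards leaves a window in which the URB belongs to no anchor. A sketch of the pattern, with hypothetical names:

#include <linux/usb.h>

static int my_submit_rx(struct urb *urb, struct usb_anchor *anchor)
{
	int rc;

	/* anchor first: reachable for cleanup from this point on */
	usb_anchor_urb(urb, anchor);

	rc = usb_submit_urb(urb, GFP_ATOMIC);
	if (rc) {
		/* never submitted: take it back off the anchor */
		usb_unanchor_urb(urb);
		return rc;
	}
	return 0;
}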
+15
drivers/net/can/vcan.c
··· 130 130 return NETDEV_TX_OK; 131 131 } 132 132 133 + static void vcan_set_cap_info(struct net_device *dev) 134 + { 135 + u32 can_cap = CAN_CAP_CC; 136 + 137 + if (dev->mtu > CAN_MTU) 138 + can_cap |= CAN_CAP_FD; 139 + 140 + if (dev->mtu >= CANXL_MIN_MTU) 141 + can_cap |= CAN_CAP_XL; 142 + 143 + can_set_cap(dev, can_cap); 144 + } 145 + 133 146 static int vcan_change_mtu(struct net_device *dev, int new_mtu) 134 147 { 135 148 /* Do not allow changing the MTU while running */ ··· 154 141 return -EINVAL; 155 142 156 143 WRITE_ONCE(dev->mtu, new_mtu); 144 + vcan_set_cap_info(dev); 157 145 return 0; 158 146 } 159 147 ··· 176 162 dev->tx_queue_len = 0; 177 163 dev->flags = IFF_NOARP; 178 164 can_set_ml_priv(dev, netdev_priv(dev)); 165 + vcan_set_cap_info(dev); 179 166 180 167 /* set flags according to driver capabilities */ 181 168 if (echo)
+15
drivers/net/can/vxcan.c
··· 125 125 return iflink; 126 126 } 127 127 128 + static void vxcan_set_cap_info(struct net_device *dev) 129 + { 130 + u32 can_cap = CAN_CAP_CC; 131 + 132 + if (dev->mtu > CAN_MTU) 133 + can_cap |= CAN_CAP_FD; 134 + 135 + if (dev->mtu >= CANXL_MIN_MTU) 136 + can_cap |= CAN_CAP_XL; 137 + 138 + can_set_cap(dev, can_cap); 139 + } 140 + 128 141 static int vxcan_change_mtu(struct net_device *dev, int new_mtu) 129 142 { 130 143 /* Do not allow changing the MTU while running */ ··· 149 136 return -EINVAL; 150 137 151 138 WRITE_ONCE(dev->mtu, new_mtu); 139 + vxcan_set_cap_info(dev); 152 140 return 0; 153 141 } 154 142 ··· 181 167 182 168 can_ml = netdev_priv(dev) + ALIGN(sizeof(struct vxcan_priv), NETDEV_ALIGN); 183 169 can_set_ml_priv(dev, can_ml); 170 + vxcan_set_cap_info(dev); 184 171 } 185 172 186 173 /* forward declaration for rtnl_create_link() */
+1 -1
drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.c
··· 218 218 ioq_irq_err: 219 219 while (i) { 220 220 --i; 221 - free_irq(oct->msix_entries[i].vector, oct); 221 + free_irq(oct->msix_entries[i].vector, oct->ioq_vector[i]); 222 222 } 223 223 return -1; 224 224 }
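The unwind loop now passes free_irq() the same per-queue cookie that request_irq() registered; with a mismatched dev_id the IRQ action is never found, and for shared interrupts the wrong handler could otherwise be targeted. A small sketch of the request/unwind pairing, all names hypothetical:

#include <linux/interrupt.h>
#include <linux/pci.h>

struct my_dev {
	struct msix_entry *msix_entries;
	void **ioq_vector;	/* per-queue cookies given to request_irq() */
};

static irqreturn_t my_ioq_handler(int irq, void *data)
{
	return IRQ_HANDLED;
}

static int my_request_ioq_irqs(struct my_dev *oct, int n)
{
	int i, err;

	for (i = 0; i < n; i++) {
		err = request_irq(oct->msix_entries[i].vector, my_ioq_handler,
				  0, "my-ioq", oct->ioq_vector[i]);
		if (err)
			goto unwind;
	}
	return 0;

unwind:
	while (i--)	/* frees only the vectors already requested */
		free_irq(oct->msix_entries[i].vector, oct->ioq_vector[i]);
	return err;
}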
+8 -5
drivers/net/ethernet/mellanox/mlx5/core/en.h
··· 962 962 }; 963 963 964 964 struct mlx5e_dev { 965 - struct mlx5e_priv *priv; 965 + struct net_device *netdev; 966 966 struct devlink_port dl_port; 967 967 }; 968 968 ··· 1242 1242 mlx5e_create_netdev(struct mlx5_core_dev *mdev, const struct mlx5e_profile *profile); 1243 1243 int mlx5e_attach_netdev(struct mlx5e_priv *priv); 1244 1244 void mlx5e_detach_netdev(struct mlx5e_priv *priv); 1245 - void mlx5e_destroy_netdev(struct mlx5e_priv *priv); 1246 - int mlx5e_netdev_change_profile(struct mlx5e_priv *priv, 1247 - const struct mlx5e_profile *new_profile, void *new_ppriv); 1248 - void mlx5e_netdev_attach_nic_profile(struct mlx5e_priv *priv); 1245 + void mlx5e_destroy_netdev(struct net_device *netdev); 1246 + int mlx5e_netdev_change_profile(struct net_device *netdev, 1247 + struct mlx5_core_dev *mdev, 1248 + const struct mlx5e_profile *new_profile, 1249 + void *new_ppriv); 1250 + void mlx5e_netdev_attach_nic_profile(struct net_device *netdev, 1251 + struct mlx5_core_dev *mdev); 1249 1252 void mlx5e_set_netdev_mtu_boundaries(struct mlx5e_priv *priv); 1250 1253 void mlx5e_build_nic_params(struct mlx5e_priv *priv, struct mlx5e_xsk *xsk, u16 mtu); 1251 1254
+56 -30
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 6326 6326 6327 6327 void mlx5e_priv_cleanup(struct mlx5e_priv *priv) 6328 6328 { 6329 + bool destroying = test_bit(MLX5E_STATE_DESTROYING, &priv->state); 6329 6330 int i; 6330 6331 6331 6332 /* bail if change profile failed and also rollback failed */ ··· 6354 6353 } 6355 6354 6356 6355 memset(priv, 0, sizeof(*priv)); 6356 + if (destroying) /* restore destroying bit, to allow unload */ 6357 + set_bit(MLX5E_STATE_DESTROYING, &priv->state); 6357 6358 } 6358 6359 6359 6360 static unsigned int mlx5e_get_max_num_txqs(struct mlx5_core_dev *mdev, ··· 6588 6585 return err; 6589 6586 } 6590 6587 6591 - int mlx5e_netdev_change_profile(struct mlx5e_priv *priv, 6592 - const struct mlx5e_profile *new_profile, void *new_ppriv) 6588 + int mlx5e_netdev_change_profile(struct net_device *netdev, 6589 + struct mlx5_core_dev *mdev, 6590 + const struct mlx5e_profile *new_profile, 6591 + void *new_ppriv) 6593 6592 { 6594 - const struct mlx5e_profile *orig_profile = priv->profile; 6595 - struct net_device *netdev = priv->netdev; 6596 - struct mlx5_core_dev *mdev = priv->mdev; 6597 - void *orig_ppriv = priv->ppriv; 6593 + struct mlx5e_priv *priv = netdev_priv(netdev); 6594 + const struct mlx5e_profile *orig_profile; 6598 6595 int err, rollback_err; 6596 + void *orig_ppriv; 6599 6597 6600 - /* cleanup old profile */ 6601 - mlx5e_detach_netdev(priv); 6602 - priv->profile->cleanup(priv); 6603 - mlx5e_priv_cleanup(priv); 6598 + orig_profile = priv->profile; 6599 + orig_ppriv = priv->ppriv; 6600 + 6601 + /* NULL could happen if previous change_profile failed to rollback */ 6602 + if (priv->profile) { 6603 + WARN_ON_ONCE(priv->mdev != mdev); 6604 + /* cleanup old profile */ 6605 + mlx5e_detach_netdev(priv); 6606 + priv->profile->cleanup(priv); 6607 + mlx5e_priv_cleanup(priv); 6608 + } 6609 + /* priv members are not valid from this point ... 
*/ 6604 6610 6605 6611 if (mdev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR) { 6606 6612 mlx5e_netdev_init_profile(netdev, mdev, new_profile, new_ppriv); ··· 6626 6614 return 0; 6627 6615 6628 6616 rollback: 6617 + if (!orig_profile) { 6618 + netdev_warn(netdev, "no original profile to rollback to\n"); 6619 + priv->profile = NULL; 6620 + return err; 6621 + } 6622 + 6629 6623 rollback_err = mlx5e_netdev_attach_profile(netdev, mdev, orig_profile, orig_ppriv); 6630 - if (rollback_err) 6631 - netdev_err(netdev, "%s: failed to rollback to orig profile, %d\n", 6632 - __func__, rollback_err); 6624 + if (rollback_err) { 6625 + netdev_err(netdev, "failed to rollback to orig profile, %d\n", 6626 + rollback_err); 6627 + priv->profile = NULL; 6628 + } 6633 6629 return err; 6634 6630 } 6635 6631 6636 - void mlx5e_netdev_attach_nic_profile(struct mlx5e_priv *priv) 6632 + void mlx5e_netdev_attach_nic_profile(struct net_device *netdev, 6633 + struct mlx5_core_dev *mdev) 6637 6634 { 6638 - mlx5e_netdev_change_profile(priv, &mlx5e_nic_profile, NULL); 6635 + mlx5e_netdev_change_profile(netdev, mdev, &mlx5e_nic_profile, NULL); 6639 6636 } 6640 6637 6641 - void mlx5e_destroy_netdev(struct mlx5e_priv *priv) 6638 + void mlx5e_destroy_netdev(struct net_device *netdev) 6642 6639 { 6643 - struct net_device *netdev = priv->netdev; 6640 + struct mlx5e_priv *priv = netdev_priv(netdev); 6644 6641 6645 - mlx5e_priv_cleanup(priv); 6642 + if (priv->profile) 6643 + mlx5e_priv_cleanup(priv); 6646 6644 free_netdev(netdev); 6647 6645 } 6648 6646 ··· 6660 6638 { 6661 6639 struct mlx5_adev *edev = container_of(adev, struct mlx5_adev, adev); 6662 6640 struct mlx5e_dev *mlx5e_dev = auxiliary_get_drvdata(adev); 6663 - struct mlx5e_priv *priv = mlx5e_dev->priv; 6664 - struct net_device *netdev = priv->netdev; 6641 + struct mlx5e_priv *priv = netdev_priv(mlx5e_dev->netdev); 6642 + struct net_device *netdev = mlx5e_dev->netdev; 6665 6643 struct mlx5_core_dev *mdev = edev->mdev; 6666 6644 struct mlx5_core_dev *pos, *to; 6667 6645 int err, i; ··· 6707 6685 6708 6686 static int _mlx5e_suspend(struct auxiliary_device *adev, bool pre_netdev_reg) 6709 6687 { 6688 + struct mlx5_adev *edev = container_of(adev, struct mlx5_adev, adev); 6710 6689 struct mlx5e_dev *mlx5e_dev = auxiliary_get_drvdata(adev); 6711 - struct mlx5e_priv *priv = mlx5e_dev->priv; 6712 - struct net_device *netdev = priv->netdev; 6713 - struct mlx5_core_dev *mdev = priv->mdev; 6690 + struct mlx5e_priv *priv = netdev_priv(mlx5e_dev->netdev); 6691 + struct net_device *netdev = mlx5e_dev->netdev; 6692 + struct mlx5_core_dev *mdev = edev->mdev; 6714 6693 struct mlx5_core_dev *pos; 6715 6694 int i; 6716 6695 ··· 6772 6749 goto err_devlink_port_unregister; 6773 6750 } 6774 6751 SET_NETDEV_DEVLINK_PORT(netdev, &mlx5e_dev->dl_port); 6752 + mlx5e_dev->netdev = netdev; 6775 6753 6776 6754 mlx5e_build_nic_netdev(netdev); 6777 6755 6778 6756 priv = netdev_priv(netdev); 6779 - mlx5e_dev->priv = priv; 6780 6757 6781 6758 priv->profile = profile; 6782 6759 priv->ppriv = NULL; ··· 6809 6786 err_profile_cleanup: 6810 6787 profile->cleanup(priv); 6811 6788 err_destroy_netdev: 6812 - mlx5e_destroy_netdev(priv); 6789 + mlx5e_destroy_netdev(netdev); 6813 6790 err_devlink_port_unregister: 6814 6791 mlx5e_devlink_port_unregister(mlx5e_dev); 6815 6792 err_devlink_unregister: ··· 6839 6816 { 6840 6817 struct mlx5_adev *edev = container_of(adev, struct mlx5_adev, adev); 6841 6818 struct mlx5e_dev *mlx5e_dev = auxiliary_get_drvdata(adev); 6842 - struct mlx5e_priv *priv = mlx5e_dev->priv; 6819 + 
struct net_device *netdev = mlx5e_dev->netdev; 6820 + struct mlx5e_priv *priv = netdev_priv(netdev); 6843 6821 struct mlx5_core_dev *mdev = edev->mdev; 6844 6822 6845 6823 mlx5_core_uplink_netdev_set(mdev, NULL); 6846 - mlx5e_dcbnl_delete_app(priv); 6824 + 6825 + if (priv->profile) 6826 + mlx5e_dcbnl_delete_app(priv); 6847 6827 /* When unload driver, the netdev is in registered state 6848 6828 * if it's from legacy mode. If from switchdev mode, it 6849 6829 * is already unregistered before changing to NIC profile. 6850 6830 */ 6851 - if (priv->netdev->reg_state == NETREG_REGISTERED) { 6852 - unregister_netdev(priv->netdev); 6831 + if (netdev->reg_state == NETREG_REGISTERED) { 6832 + unregister_netdev(netdev); 6853 6833 _mlx5e_suspend(adev, false); 6854 6834 } else { 6855 6835 struct mlx5_core_dev *pos; ··· 6867 6841 /* Avoid cleanup if profile rollback failed. */ 6868 6842 if (priv->profile) 6869 6843 priv->profile->cleanup(priv); 6870 - mlx5e_destroy_netdev(priv); 6844 + mlx5e_destroy_netdev(netdev); 6871 6845 mlx5e_devlink_port_unregister(mlx5e_dev); 6872 6846 mlx5e_destroy_devlink(mlx5e_dev); 6873 6847 }
+7 -8
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
··· 1508 1508 { 1509 1509 struct mlx5e_rep_priv *rpriv = mlx5e_rep_to_rep_priv(rep); 1510 1510 struct net_device *netdev; 1511 - struct mlx5e_priv *priv; 1512 1511 int err; 1513 1512 1514 1513 netdev = mlx5_uplink_netdev_get(dev); 1515 1514 if (!netdev) 1516 1515 return 0; 1517 1516 1518 - priv = netdev_priv(netdev); 1519 - rpriv->netdev = priv->netdev; 1520 - err = mlx5e_netdev_change_profile(priv, &mlx5e_uplink_rep_profile, 1521 - rpriv); 1517 + /* must not use netdev_priv(netdev), it might not be initialized yet */ 1518 + rpriv->netdev = netdev; 1519 + err = mlx5e_netdev_change_profile(netdev, dev, 1520 + &mlx5e_uplink_rep_profile, rpriv); 1522 1521 mlx5_uplink_netdev_put(dev, netdev); 1523 1522 return err; 1524 1523 } ··· 1545 1546 if (!(priv->mdev->priv.flags & MLX5_PRIV_FLAGS_SWITCH_LEGACY)) 1546 1547 unregister_netdev(netdev); 1547 1548 1548 - mlx5e_netdev_attach_nic_profile(priv); 1549 + mlx5e_netdev_attach_nic_profile(netdev, priv->mdev); 1549 1550 } 1550 1551 1551 1552 static int ··· 1611 1612 priv->profile->cleanup(priv); 1612 1613 1613 1614 err_destroy_netdev: 1614 - mlx5e_destroy_netdev(netdev_priv(netdev)); 1615 + mlx5e_destroy_netdev(netdev); 1615 1616 return err; 1616 1617 } 1617 1618 ··· 1666 1667 mlx5e_rep_vnic_reporter_destroy(priv); 1667 1668 mlx5e_detach_netdev(priv); 1668 1669 priv->profile->cleanup(priv); 1669 - mlx5e_destroy_netdev(priv); 1670 + mlx5e_destroy_netdev(netdev); 1670 1671 free_ppriv: 1671 1672 kvfree(ppriv); /* mlx5e_rep_priv */ 1672 1673 }
+3
drivers/net/hyperv/netvsc_drv.c
··· 1750 1750 rxfh->hfunc != ETH_RSS_HASH_TOP) 1751 1751 return -EOPNOTSUPP; 1752 1752 1753 + if (!ndc->rx_table_sz) 1754 + return -EOPNOTSUPP; 1755 + 1753 1756 rndis_dev = ndev->extension; 1754 1757 if (rxfh->indir) { 1755 1758 for (i = 0; i < ndc->rx_table_sz; i++)
+13 -7
drivers/net/macvlan.c
··· 59 59 60 60 struct macvlan_source_entry { 61 61 struct hlist_node hlist; 62 - struct macvlan_dev *vlan; 62 + struct macvlan_dev __rcu *vlan; 63 63 unsigned char addr[6+2] __aligned(sizeof(u16)); 64 64 struct rcu_head rcu; 65 65 }; ··· 146 146 147 147 hlist_for_each_entry_rcu(entry, h, hlist, lockdep_rtnl_is_held()) { 148 148 if (ether_addr_equal_64bits(entry->addr, addr) && 149 - entry->vlan == vlan) 149 + rcu_access_pointer(entry->vlan) == vlan) 150 150 return entry; 151 151 } 152 152 return NULL; ··· 168 168 return -ENOMEM; 169 169 170 170 ether_addr_copy(entry->addr, addr); 171 - entry->vlan = vlan; 171 + RCU_INIT_POINTER(entry->vlan, vlan); 172 172 h = &port->vlan_source_hash[macvlan_eth_hash(addr)]; 173 173 hlist_add_head_rcu(&entry->hlist, h); 174 174 vlan->macaddr_count++; ··· 187 187 188 188 static void macvlan_hash_del_source(struct macvlan_source_entry *entry) 189 189 { 190 + RCU_INIT_POINTER(entry->vlan, NULL); 190 191 hlist_del_rcu(&entry->hlist); 191 192 kfree_rcu(entry, rcu); 192 193 } ··· 391 390 int i; 392 391 393 392 hash_for_each_safe(port->vlan_source_hash, i, next, entry, hlist) 394 - if (entry->vlan == vlan) 393 + if (rcu_access_pointer(entry->vlan) == vlan) 395 394 macvlan_hash_del_source(entry); 396 395 397 396 vlan->macaddr_count = 0; ··· 434 433 435 434 hlist_for_each_entry_rcu(entry, h, hlist) { 436 435 if (ether_addr_equal_64bits(entry->addr, addr)) { 437 - if (entry->vlan->flags & MACVLAN_FLAG_NODST) 436 + struct macvlan_dev *vlan = rcu_dereference(entry->vlan); 437 + 438 + if (!vlan) 439 + continue; 440 + 441 + if (vlan->flags & MACVLAN_FLAG_NODST) 438 442 consume = true; 439 - macvlan_forward_source_one(skb, entry->vlan); 443 + macvlan_forward_source_one(skb, vlan); 440 444 } 441 445 } 442 446 ··· 1686 1680 struct macvlan_source_entry *entry; 1687 1681 1688 1682 hlist_for_each_entry_rcu(entry, h, hlist, lockdep_rtnl_is_held()) { 1689 - if (entry->vlan != vlan) 1683 + if (rcu_access_pointer(entry->vlan) != vlan) 1690 1684 continue; 1691 1685 if (nla_put(skb, IFLA_MACVLAN_MACADDR, ETH_ALEN, entry->addr)) 1692 1686 return 1;
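The macvlan hunk above converts the source-entry back-pointer to an __rcu annotation so lockless receive-path readers either see a valid vlan or NULL, never a stale pointer. A minimal sketch of the same pattern with hypothetical names (demo_entry/demo_owner are illustrative, not part of the driver):

#include <linux/rcupdate.h>
#include <linux/slab.h>

struct demo_owner;

struct demo_entry {
	struct demo_owner __rcu *owner;
	struct rcu_head rcu;
};

/* Writer, under the update-side lock: publish the back-pointer. */
static void demo_entry_link(struct demo_entry *e, struct demo_owner *o)
{
	rcu_assign_pointer(e->owner, o);
}

/* Writer: clear before kfree_rcu() so readers racing with deletion
 * observe NULL instead of a soon-to-be-freed object. */
static void demo_entry_unlink(struct demo_entry *e)
{
	RCU_INIT_POINTER(e->owner, NULL);
	kfree_rcu(e, rcu);
}

/* Reader: rcu_access_pointer() is enough for a pure comparison; use
 * rcu_dereference() under rcu_read_lock() before dereferencing. */
static bool demo_entry_owned_by(struct demo_entry *e, struct demo_owner *o)
{
	return rcu_access_pointer(e->owner) == o;
}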
+2 -2
drivers/net/phy/motorcomm.c
··· 1745 1745 val |= YT8521_LED_1000_ON_EN; 1746 1746 1747 1747 if (test_bit(TRIGGER_NETDEV_FULL_DUPLEX, &rules)) 1748 - val |= YT8521_LED_HDX_ON_EN; 1748 + val |= YT8521_LED_FDX_ON_EN; 1749 1749 1750 1750 if (test_bit(TRIGGER_NETDEV_HALF_DUPLEX, &rules)) 1751 - val |= YT8521_LED_FDX_ON_EN; 1751 + val |= YT8521_LED_HDX_ON_EN; 1752 1752 1753 1753 if (test_bit(TRIGGER_NETDEV_TX, &rules) || 1754 1754 test_bit(TRIGGER_NETDEV_RX, &rules))
+43 -132
drivers/net/virtio_net.c
··· 425 425 u16 rss_indir_table_size; 426 426 u32 rss_hash_types_supported; 427 427 u32 rss_hash_types_saved; 428 - struct virtio_net_rss_config_hdr *rss_hdr; 429 - struct virtio_net_rss_config_trailer rss_trailer; 430 - u8 rss_hash_key_data[VIRTIO_NET_RSS_MAX_KEY_SIZE]; 431 428 432 429 /* Has control virtqueue */ 433 430 bool has_cvq; ··· 438 441 /* Packet virtio header size */ 439 442 u8 hdr_len; 440 443 441 - /* Work struct for delayed refilling if we run low on memory. */ 442 - struct delayed_work refill; 443 - 444 444 /* UDP tunnel support */ 445 445 bool tx_tnl; 446 446 447 447 bool rx_tnl; 448 448 449 449 bool rx_tnl_csum; 450 - 451 - /* Is delayed refill enabled? */ 452 - bool refill_enabled; 453 - 454 - /* The lock to synchronize the access to refill_enabled */ 455 - spinlock_t refill_lock; 456 450 457 451 /* Work struct for config space updates */ 458 452 struct work_struct config_work; ··· 481 493 struct failover *failover; 482 494 483 495 u64 device_stats_cap; 496 + 497 + struct virtio_net_rss_config_hdr *rss_hdr; 498 + 499 + /* Must be last as it ends in a flexible-array member. */ 500 + TRAILING_OVERLAP(struct virtio_net_rss_config_trailer, rss_trailer, hash_key_data, 501 + u8 rss_hash_key_data[VIRTIO_NET_RSS_MAX_KEY_SIZE]; 502 + ); 484 503 }; 504 + static_assert(offsetof(struct virtnet_info, rss_trailer.hash_key_data) == 505 + offsetof(struct virtnet_info, rss_hash_key_data)); 485 506 486 507 struct padded_vnet_hdr { 487 508 struct virtio_net_hdr_v1_hash hdr; ··· 715 718 give_pages(rq, buf); 716 719 else 717 720 put_page(virt_to_head_page(buf)); 718 - } 719 - 720 - static void enable_delayed_refill(struct virtnet_info *vi) 721 - { 722 - spin_lock_bh(&vi->refill_lock); 723 - vi->refill_enabled = true; 724 - spin_unlock_bh(&vi->refill_lock); 725 - } 726 - 727 - static void disable_delayed_refill(struct virtnet_info *vi) 728 - { 729 - spin_lock_bh(&vi->refill_lock); 730 - vi->refill_enabled = false; 731 - spin_unlock_bh(&vi->refill_lock); 732 721 } 733 722 734 723 static void enable_rx_mode_work(struct virtnet_info *vi) ··· 2931 2948 napi_disable(napi); 2932 2949 } 2933 2950 2934 - static void refill_work(struct work_struct *work) 2935 - { 2936 - struct virtnet_info *vi = 2937 - container_of(work, struct virtnet_info, refill.work); 2938 - bool still_empty; 2939 - int i; 2940 - 2941 - for (i = 0; i < vi->curr_queue_pairs; i++) { 2942 - struct receive_queue *rq = &vi->rq[i]; 2943 - 2944 - /* 2945 - * When queue API support is added in the future and the call 2946 - * below becomes napi_disable_locked, this driver will need to 2947 - * be refactored. 2948 - * 2949 - * One possible solution would be to: 2950 - * - cancel refill_work with cancel_delayed_work (note: 2951 - * non-sync) 2952 - * - cancel refill_work with cancel_delayed_work_sync in 2953 - * virtnet_remove after the netdev is unregistered 2954 - * - wrap all of the work in a lock (perhaps the netdev 2955 - * instance lock) 2956 - * - check netif_running() and return early to avoid a race 2957 - */ 2958 - napi_disable(&rq->napi); 2959 - still_empty = !try_fill_recv(vi, rq, GFP_KERNEL); 2960 - virtnet_napi_do_enable(rq->vq, &rq->napi); 2961 - 2962 - /* In theory, this can happen: if we don't get any buffers in 2963 - * we will *never* try to fill again. 
2964 - */ 2965 - if (still_empty) 2966 - schedule_delayed_work(&vi->refill, HZ/2); 2967 - } 2968 - } 2969 - 2970 2951 static int virtnet_receive_xsk_bufs(struct virtnet_info *vi, 2971 2952 struct receive_queue *rq, 2972 2953 int budget, ··· 2993 3046 else 2994 3047 packets = virtnet_receive_packets(vi, rq, budget, xdp_xmit, &stats); 2995 3048 3049 + u64_stats_set(&stats.packets, packets); 2996 3050 if (rq->vq->num_free > min((unsigned int)budget, virtqueue_get_vring_size(rq->vq)) / 2) { 2997 - if (!try_fill_recv(vi, rq, GFP_ATOMIC)) { 2998 - spin_lock(&vi->refill_lock); 2999 - if (vi->refill_enabled) 3000 - schedule_delayed_work(&vi->refill, 0); 3001 - spin_unlock(&vi->refill_lock); 3002 - } 3051 + if (!try_fill_recv(vi, rq, GFP_ATOMIC)) 3052 + /* We need to retry refilling in the next NAPI poll so 3053 + * we must return budget to make sure the NAPI is 3054 + * repolled. 3055 + */ 3056 + packets = budget; 3003 3057 } 3004 3058 3005 - u64_stats_set(&stats.packets, packets); 3006 3059 u64_stats_update_begin(&rq->stats.syncp); 3007 3060 for (i = 0; i < ARRAY_SIZE(virtnet_rq_stats_desc); i++) { 3008 3061 size_t offset = virtnet_rq_stats_desc[i].offset; ··· 3173 3226 struct virtnet_info *vi = netdev_priv(dev); 3174 3227 int i, err; 3175 3228 3176 - enable_delayed_refill(vi); 3177 - 3178 3229 for (i = 0; i < vi->max_queue_pairs; i++) { 3179 3230 if (i < vi->curr_queue_pairs) 3180 - /* Make sure we have some buffers: if oom use wq. */ 3181 - if (!try_fill_recv(vi, &vi->rq[i], GFP_KERNEL)) 3182 - schedule_delayed_work(&vi->refill, 0); 3231 + /* Pre-fill rq aggressively, to make sure we are ready to 3232 + * get packets immediately. 3233 + */ 3234 + try_fill_recv(vi, &vi->rq[i], GFP_KERNEL); 3183 3235 3184 3236 err = virtnet_enable_queue_pair(vi, i); 3185 3237 if (err < 0) ··· 3197 3251 return 0; 3198 3252 3199 3253 err_enable_qp: 3200 - disable_delayed_refill(vi); 3201 - cancel_delayed_work_sync(&vi->refill); 3202 - 3203 3254 for (i--; i >= 0; i--) { 3204 3255 virtnet_disable_queue_pair(vi, i); 3205 3256 virtnet_cancel_dim(vi, &vi->rq[i].dim); ··· 3375 3432 return NETDEV_TX_OK; 3376 3433 } 3377 3434 3378 - static void __virtnet_rx_pause(struct virtnet_info *vi, 3379 - struct receive_queue *rq) 3435 + static void virtnet_rx_pause(struct virtnet_info *vi, 3436 + struct receive_queue *rq) 3380 3437 { 3381 3438 bool running = netif_running(vi->dev); 3382 3439 ··· 3390 3447 { 3391 3448 int i; 3392 3449 3393 - /* 3394 - * Make sure refill_work does not run concurrently to 3395 - * avoid napi_disable race which leads to deadlock. 3396 - */ 3397 - disable_delayed_refill(vi); 3398 - cancel_delayed_work_sync(&vi->refill); 3399 3450 for (i = 0; i < vi->max_queue_pairs; i++) 3400 - __virtnet_rx_pause(vi, &vi->rq[i]); 3451 + virtnet_rx_pause(vi, &vi->rq[i]); 3401 3452 } 3402 3453 3403 - static void virtnet_rx_pause(struct virtnet_info *vi, struct receive_queue *rq) 3454 + static void virtnet_rx_resume(struct virtnet_info *vi, 3455 + struct receive_queue *rq, 3456 + bool refill) 3404 3457 { 3405 - /* 3406 - * Make sure refill_work does not run concurrently to 3407 - * avoid napi_disable race which leads to deadlock. 3408 - */ 3409 - disable_delayed_refill(vi); 3410 - cancel_delayed_work_sync(&vi->refill); 3411 - __virtnet_rx_pause(vi, rq); 3412 - } 3458 + if (netif_running(vi->dev)) { 3459 + /* Pre-fill rq aggressively, to make sure we are ready to get 3460 + * packets immediately.
3461 + */ 3462 + if (refill) 3463 + try_fill_recv(vi, rq, GFP_KERNEL); 3413 3464 3414 - static void __virtnet_rx_resume(struct virtnet_info *vi, 3415 - struct receive_queue *rq, 3416 - bool refill) 3417 - { 3418 - bool running = netif_running(vi->dev); 3419 - bool schedule_refill = false; 3420 - 3421 - if (refill && !try_fill_recv(vi, rq, GFP_KERNEL)) 3422 - schedule_refill = true; 3423 - if (running) 3424 3465 virtnet_napi_enable(rq); 3425 - 3426 - if (schedule_refill) 3427 - schedule_delayed_work(&vi->refill, 0); 3466 + } 3428 3467 } 3429 3468 3430 3469 static void virtnet_rx_resume_all(struct virtnet_info *vi) 3431 3470 { 3432 3471 int i; 3433 3472 3434 - enable_delayed_refill(vi); 3435 3473 for (i = 0; i < vi->max_queue_pairs; i++) { 3436 3474 if (i < vi->curr_queue_pairs) 3437 - __virtnet_rx_resume(vi, &vi->rq[i], true); 3475 + virtnet_rx_resume(vi, &vi->rq[i], true); 3438 3476 else 3439 - __virtnet_rx_resume(vi, &vi->rq[i], false); 3477 + virtnet_rx_resume(vi, &vi->rq[i], false); 3440 3478 } 3441 - } 3442 - 3443 - static void virtnet_rx_resume(struct virtnet_info *vi, struct receive_queue *rq) 3444 - { 3445 - enable_delayed_refill(vi); 3446 - __virtnet_rx_resume(vi, rq, true); 3447 3479 } 3448 3480 3449 3481 static int virtnet_rx_resize(struct virtnet_info *vi, ··· 3434 3516 if (err) 3435 3517 netdev_err(vi->dev, "resize rx fail: rx queue index: %d err: %d\n", qindex, err); 3436 3518 3437 - virtnet_rx_resume(vi, rq); 3519 + virtnet_rx_resume(vi, rq, true); 3438 3520 return err; 3439 3521 } 3440 3522 ··· 3747 3829 } 3748 3830 succ: 3749 3831 vi->curr_queue_pairs = queue_pairs; 3750 - /* virtnet_open() will refill when device is going to up. */ 3751 - spin_lock_bh(&vi->refill_lock); 3752 - if (dev->flags & IFF_UP && vi->refill_enabled) 3753 - schedule_delayed_work(&vi->refill, 0); 3754 - spin_unlock_bh(&vi->refill_lock); 3832 + if (dev->flags & IFF_UP) { 3833 + local_bh_disable(); 3834 + for (int i = 0; i < vi->curr_queue_pairs; ++i) 3835 + virtqueue_napi_schedule(&vi->rq[i].napi, vi->rq[i].vq); 3836 + local_bh_enable(); 3837 + } 3755 3838 3756 3839 return 0; 3757 3840 } ··· 3762 3843 struct virtnet_info *vi = netdev_priv(dev); 3763 3844 int i; 3764 3845 3765 - /* Make sure NAPI doesn't schedule refill work */ 3766 - disable_delayed_refill(vi); 3767 - /* Make sure refill_work doesn't re-enable napi! 
*/ 3768 - cancel_delayed_work_sync(&vi->refill); 3769 3846 /* Prevent the config change callback from changing carrier 3770 3847 * after close 3771 3848 */ ··· 5717 5802 5718 5803 virtio_device_ready(vdev); 5719 5804 5720 - enable_delayed_refill(vi); 5721 5805 enable_rx_mode_work(vi); 5722 5806 5723 5807 if (netif_running(vi->dev)) { ··· 5806 5892 5807 5893 rq->xsk_pool = pool; 5808 5894 5809 - virtnet_rx_resume(vi, rq); 5895 + virtnet_rx_resume(vi, rq, true); 5810 5896 5811 5897 if (pool) 5812 5898 return 0; ··· 6473 6559 if (!vi->rq) 6474 6560 goto err_rq; 6475 6561 6476 - INIT_DELAYED_WORK(&vi->refill, refill_work); 6477 6562 for (i = 0; i < vi->max_queue_pairs; i++) { 6478 6563 vi->rq[i].pages = NULL; 6479 6564 netif_napi_add_config(vi->dev, &vi->rq[i].napi, virtnet_poll, ··· 6814 6901 6815 6902 INIT_WORK(&vi->config_work, virtnet_config_changed_work); 6816 6903 INIT_WORK(&vi->rx_mode_work, virtnet_rx_mode_work); 6817 - spin_lock_init(&vi->refill_lock); 6818 6904 6819 6905 if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF)) { 6820 6906 vi->mergeable_rx_bufs = true; ··· 7077 7165 net_failover_destroy(vi->failover); 7078 7166 free_vqs: 7079 7167 virtio_reset_device(vdev); 7080 - cancel_delayed_work_sync(&vi->refill); 7081 7168 free_receive_page_frags(vi); 7082 7169 virtnet_del_vqs(vi); 7083 7170 free:
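The virtio_net rework above deletes refill_work entirely and leans on a NAPI guarantee instead: a poll handler that returns the full budget is kept scheduled and polled again, so a failed GFP_ATOMIC refill is simply retried on the next poll. A hedged sketch of that idiom with hypothetical driver names (demo_*):

static int demo_poll(struct napi_struct *napi, int budget)
{
	struct demo_rq *rq = container_of(napi, struct demo_rq, napi);
	int work_done = demo_receive(rq, budget);

	/* Refill failed (likely transient OOM): claim the whole budget
	 * so the core keeps this NAPI scheduled and we retry soon. */
	if (demo_rq_running_low(rq) && !demo_try_fill(rq, GFP_ATOMIC))
		work_done = budget;

	if (work_done < budget && napi_complete_done(napi, work_done))
		demo_enable_irq(rq);

	return work_done;
}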
+3 -34
drivers/pci/controller/dwc/pci-meson.c
··· 37 37 #define PCIE_CFG_STATUS17 0x44 38 38 #define PM_CURRENT_STATE(x) (((x) >> 7) & 0x1) 39 39 40 - #define WAIT_LINKUP_TIMEOUT 4000 41 40 #define PORT_CLK_RATE 100000000UL 42 41 #define MAX_PAYLOAD_SIZE 256 43 42 #define MAX_READ_REQ_SIZE 256 ··· 349 350 static bool meson_pcie_link_up(struct dw_pcie *pci) 350 351 { 351 352 struct meson_pcie *mp = to_meson_pcie(pci); 352 - struct device *dev = pci->dev; 353 - u32 speed_okay = 0; 354 - u32 cnt = 0; 355 - u32 state12, state17, smlh_up, ltssm_up, rdlh_up; 353 + u32 state12; 356 354 357 - do { 358 - state12 = meson_cfg_readl(mp, PCIE_CFG_STATUS12); 359 - state17 = meson_cfg_readl(mp, PCIE_CFG_STATUS17); 360 - smlh_up = IS_SMLH_LINK_UP(state12); 361 - rdlh_up = IS_RDLH_LINK_UP(state12); 362 - ltssm_up = IS_LTSSM_UP(state12); 363 - 364 - if (PM_CURRENT_STATE(state17) < PCIE_GEN3) 365 - speed_okay = 1; 366 - 367 - if (smlh_up) 368 - dev_dbg(dev, "smlh_link_up is on\n"); 369 - if (rdlh_up) 370 - dev_dbg(dev, "rdlh_link_up is on\n"); 371 - if (ltssm_up) 372 - dev_dbg(dev, "ltssm_up is on\n"); 373 - if (speed_okay) 374 - dev_dbg(dev, "speed_okay\n"); 375 - 376 - if (smlh_up && rdlh_up && ltssm_up && speed_okay) 377 - return true; 378 - 379 - cnt++; 380 - 381 - udelay(10); 382 - } while (cnt < WAIT_LINKUP_TIMEOUT); 383 - 384 - dev_err(dev, "error: wait linkup timeout\n"); 385 - return false; 355 + state12 = meson_cfg_readl(mp, PCIE_CFG_STATUS12); 356 + return IS_SMLH_LINK_UP(state12) && IS_RDLH_LINK_UP(state12); 386 357 } 387 358 388 359 static int meson_pcie_host_init(struct dw_pcie_rp *pp)
+3 -1
drivers/pci/controller/dwc/pcie-qcom.c
··· 1047 1047 writel(WR_NO_SNOOP_OVERRIDE_EN | RD_NO_SNOOP_OVERRIDE_EN, 1048 1048 pcie->parf + PARF_NO_SNOOP_OVERRIDE); 1049 1049 1050 - qcom_pcie_clear_aspm_l0s(pcie->pci); 1051 1050 qcom_pcie_clear_hpc(pcie->pci); 1052 1051 1053 1052 return 0; ··· 1315 1316 goto err_disable_phy; 1316 1317 } 1317 1318 1319 + qcom_pcie_clear_aspm_l0s(pcie->pci); 1320 + 1318 1321 qcom_ep_reset_deassert(pcie); 1319 1322 1320 1323 if (pcie->cfg->ops->config_sid) { ··· 1465 1464 1466 1465 static const struct qcom_pcie_cfg cfg_2_3_2 = { 1467 1466 .ops = &ops_2_3_2, 1467 + .no_l0s = true, 1468 1468 }; 1469 1469 1470 1470 static const struct qcom_pcie_cfg cfg_2_3_3 = {
-1
drivers/pci/quirks.c
··· 6308 6308 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_XILINX, 0x5021, of_pci_make_dev_node); 6309 6309 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_REDHAT, 0x0005, of_pci_make_dev_node); 6310 6310 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_EFAR, 0x9660, of_pci_make_dev_node); 6311 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_RPI, PCI_DEVICE_ID_RPI_RP1_C0, of_pci_make_dev_node); 6312 6311 6313 6312 /* 6314 6313 * Devices known to require a longer delay before first config space access
-7
drivers/pci/vgaarb.c
··· 652 652 return true; 653 653 } 654 654 655 - /* 656 - * Vgadev has neither IO nor MEM enabled. If we haven't found any 657 - * other VGA devices, it is the best candidate so far. 658 - */ 659 - if (!boot_vga) 660 - return true; 661 - 662 655 return false; 663 656 } 664 657
+1
drivers/pinctrl/Kconfig
··· 491 491 depends on ARCH_MICROCHIP || COMPILE_TEST 492 492 depends on OF 493 493 select GENERIC_PINCONF 494 + select REGMAP_MMIO 494 495 default y 495 496 help 496 497 This selects the pinctrl driver for gpio2 on pic64gx.
+1 -1
drivers/pinctrl/mediatek/pinctrl-mt8189.c
··· 1642 1642 }; 1643 1643 1644 1644 static const char * const mt8189_pinctrl_register_base_names[] = { 1645 - "base", "lm", "rb0", "rb1", "bm0", "bm1", "bm2", "lt0", "lt1", "rt", 1645 + "base", "bm0", "bm1", "bm2", "lm", "lt0", "lt1", "rb0", "rb1", "rt", 1646 1646 }; 1647 1647 1648 1648 static const struct mtk_eint_hw mt8189_eint_hw = {
+1 -1
drivers/pinctrl/qcom/pinctrl-lpass-lpi.c
··· 498 498 pctrl->chip.base = -1; 499 499 pctrl->chip.ngpio = data->npins; 500 500 pctrl->chip.label = dev_name(dev); 501 - pctrl->chip.can_sleep = false; 501 + pctrl->chip.can_sleep = true; 502 502 503 503 mutex_init(&pctrl->lock); 504 504
+4 -3
drivers/resctrl/mpam_devices.c
··· 1072 1072 u64 now; 1073 1073 bool nrdy = false; 1074 1074 bool config_mismatch; 1075 - bool overflow; 1075 + bool overflow = false; 1076 1076 struct mon_read *m = arg; 1077 1077 struct mon_cfg *ctx = m->ctx; 1078 1078 bool reset_on_next_read = false; ··· 1176 1176 } 1177 1177 mpam_mon_sel_unlock(msc); 1178 1178 1179 - if (nrdy) { 1179 + if (nrdy) 1180 1180 m->err = -EBUSY; 1181 + 1182 + if (m->err) 1181 1183 return; 1182 - } 1183 1184 1184 1185 *m->val += now; 1185 1186 }
+1 -1
drivers/scsi/bfa/bfa_fcs.c
··· 1169 1169 * This function should be used only if there is any requirement 1170 1170 * to check for FOS version below 6.3. 1171 1171 * To check if the attached fabric is a brocade fabric, use 1172 - * bfa_lps_is_brcd_fabric() which works for FOS versions 6.3 1172 + * fabric->lps->brcd_switch which works for FOS versions 6.3 1173 1173 * or above only. 1174 1174 */ 1175 1175
+24
drivers/scsi/scsi_error.c
··· 1063 1063 unsigned char *cmnd, int cmnd_size, unsigned sense_bytes) 1064 1064 { 1065 1065 struct scsi_device *sdev = scmd->device; 1066 + #ifdef CONFIG_BLK_INLINE_ENCRYPTION 1067 + struct request *rq = scsi_cmd_to_rq(scmd); 1068 + #endif 1066 1069 1067 1070 /* 1068 1071 * We need saved copies of a number of fields - this is because ··· 1118 1115 (sdev->lun << 5 & 0xe0); 1119 1116 1120 1117 /* 1118 + * Encryption must be disabled for the commands submitted by the error handler. 1119 + * Hence, clear the encryption context information. 1120 + */ 1121 + #ifdef CONFIG_BLK_INLINE_ENCRYPTION 1122 + ses->rq_crypt_keyslot = rq->crypt_keyslot; 1123 + ses->rq_crypt_ctx = rq->crypt_ctx; 1124 + 1125 + rq->crypt_keyslot = NULL; 1126 + rq->crypt_ctx = NULL; 1127 + #endif 1128 + 1129 + /* 1121 1130 * Zero the sense buffer. The scsi spec mandates that any 1122 1131 * untransferred sense data should be interpreted as being zero. 1123 1132 */ ··· 1146 1131 */ 1147 1132 void scsi_eh_restore_cmnd(struct scsi_cmnd* scmd, struct scsi_eh_save *ses) 1148 1133 { 1134 + #ifdef CONFIG_BLK_INLINE_ENCRYPTION 1135 + struct request *rq = scsi_cmd_to_rq(scmd); 1136 + #endif 1137 + 1149 1138 /* 1150 1139 * Restore original data 1151 1140 */ ··· 1162 1143 scmd->underflow = ses->underflow; 1163 1144 scmd->prot_op = ses->prot_op; 1164 1145 scmd->eh_eflags = ses->eh_eflags; 1146 + 1147 + #ifdef CONFIG_BLK_INLINE_ENCRYPTION 1148 + rq->crypt_keyslot = ses->rq_crypt_keyslot; 1149 + rq->crypt_ctx = ses->rq_crypt_ctx; 1150 + #endif 1165 1151 } 1166 1152 EXPORT_SYMBOL(scsi_eh_restore_cmnd); 1167 1153
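For context on the hunk above: LLD error handlers already bracket their temporary commands with scsi_eh_prep_cmnd()/scsi_eh_restore_cmnd(), so parking the inline-encryption context inside that pair disables encryption for EH commands without touching every caller. A hedged sketch of the usual calling pattern (demo_issue_and_wait() is hypothetical):

static enum scsi_disposition demo_request_sense(struct scsi_cmnd *scmd)
{
	struct scsi_eh_save ses;
	unsigned char cmnd[6] = { REQUEST_SENSE };

	/* Saves the command state and, with the change above, parks
	 * rq->crypt_ctx/crypt_keyslot so the EH command runs unencrypted. */
	scsi_eh_prep_cmnd(scmd, &ses, cmnd, sizeof(cmnd),
			  SCSI_SENSE_BUFFERSIZE);

	demo_issue_and_wait(scmd);	/* hypothetical synchronous issue */

	/* Restores the original state, including the crypto context. */
	scsi_eh_restore_cmnd(scmd, &ses);
	return SUCCESS;
}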
+1 -1
drivers/scsi/scsi_lib.c
··· 2459 2459 * @retries: number of retries before failing 2460 2460 * @sshdr: output pointer for decoded sense information. 2461 2461 2462 - * Returns zero if unsuccessful or an error if TUR failed. For 2462 + * Returns zero if successful or an error if TUR failed. For 2463 2463 * removable media, UNIT_ATTENTION sets ->changed flag. 2464 2464 **/ 2465 2465 int
+4 -3
drivers/ufs/core/ufshcd.c
··· 10736 10736 if (is_mcq_supported(hba)) { 10737 10737 ufshcd_mcq_enable(hba); 10738 10738 err = ufshcd_alloc_mcq(hba); 10739 - if (!err) { 10740 - ufshcd_config_mcq(hba); 10741 - } else { 10739 + if (err) { 10742 10740 /* Continue with SDB mode */ 10743 10741 ufshcd_mcq_disable(hba); 10744 10742 use_mcq_mode = false; ··· 11008 11010 err = ufshcd_link_startup(hba); 11009 11011 if (err) 11010 11012 goto out_disable; 11013 + 11014 + if (hba->mcq_enabled) 11015 + ufshcd_config_mcq(hba); 11011 11016 11012 11017 if (hba->quirks & UFSHCD_QUIRK_SKIP_PH_CONFIGURATION) 11013 11018 goto initialized;
+1 -1
drivers/ufs/host/ufs-mediatek.c
··· 1112 1112 unsigned long flags; 1113 1113 u32 ah_ms = 10; 1114 1114 u32 ah_scale, ah_timer; 1115 - u32 scale_us[] = {1, 10, 100, 1000, 10000, 100000}; 1115 + static const u32 scale_us[] = {1, 10, 100, 1000, 10000, 100000}; 1116 1116 1117 1117 if (ufshcd_is_clkgating_allowed(hba)) { 1118 1118 if (ufshcd_is_auto_hibern8_supported(hba) && hba->ahit) {
+1 -1
drivers/uio/uio.c
··· 3 3 * drivers/uio/uio.c 4 4 * 5 5 * Copyright(C) 2005, Benedikt Spranger <b.spranger@linutronix.de> 6 - * Copyright(C) 2005, Thomas Gleixner <tglx@linutronix.de> 6 + * Copyright(C) 2005, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 7 7 * Copyright(C) 2006, Hans J. Koch <hjk@hansjkoch.de> 8 8 * Copyright(C) 2006, Greg Kroah-Hartman <greg@kroah.com> 9 9 *
+7 -6
drivers/xen/acpi.c
··· 89 89 int *trigger_out, 90 90 int *polarity_out) 91 91 { 92 - int gsi; 92 + u32 gsi; 93 93 u8 pin; 94 94 struct acpi_prt_entry *entry; 95 95 int trigger = ACPI_LEVEL_SENSITIVE; 96 - int polarity = acpi_irq_model == ACPI_IRQ_MODEL_GIC ? 96 + int ret, polarity = acpi_irq_model == ACPI_IRQ_MODEL_GIC ? 97 97 ACPI_ACTIVE_HIGH : ACPI_ACTIVE_LOW; 98 98 99 99 if (!dev || !gsi_out || !trigger_out || !polarity_out) ··· 105 105 106 106 entry = acpi_pci_irq_lookup(dev, pin); 107 107 if (entry) { 108 + ret = 0; 108 109 if (entry->link) 109 - gsi = acpi_pci_link_allocate_irq(entry->link, 110 + ret = acpi_pci_link_allocate_irq(entry->link, 110 111 entry->index, 111 112 &trigger, &polarity, 112 - NULL); 113 + NULL, &gsi); 113 114 else 114 115 gsi = entry->index; 115 116 } else 116 - gsi = -1; 117 + ret = -ENODEV; 117 118 118 - if (gsi < 0) 119 + if (ret < 0) 119 120 return -EINVAL; 120 121 121 122 *gsi_out = gsi;
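The helper's contract changes here: failure is now reported through the return value and the GSI comes back through a dedicated out parameter, so callers no longer overload a signed gsi as both value and error (the matching prototype update is in include/acpi/acpi_drivers.h below). A minimal caller sketch under the new signature:

	u32 gsi;
	int trigger = ACPI_LEVEL_SENSITIVE;
	int polarity = ACPI_ACTIVE_LOW;
	int ret;

	ret = acpi_pci_link_allocate_irq(handle, index, &trigger,
					 &polarity, NULL, &gsi);
	if (ret < 0)
		return ret;	/* 'gsi' is not valid on failure */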
+1
fs/btrfs/disk-io.c
··· 2255 2255 BTRFS_DATA_RELOC_TREE_OBJECTID, true); 2256 2256 if (IS_ERR(root)) { 2257 2257 if (!btrfs_test_opt(fs_info, IGNOREBADROOTS)) { 2258 + location.objectid = BTRFS_DATA_RELOC_TREE_OBJECTID; 2258 2259 ret = PTR_ERR(root); 2259 2260 goto out; 2260 2261 }
+32 -9
fs/btrfs/inode.c
··· 481 481 ASSERT(size <= sectorsize); 482 482 483 483 /* 484 - * The compressed size also needs to be no larger than a sector. 485 - * That's also why we only need one page as the parameter. 484 + * The compressed size also needs to be no larger than a page. 485 + * That's also why we only need one folio as the parameter. 486 486 */ 487 - if (compressed_folio) 487 + if (compressed_folio) { 488 488 ASSERT(compressed_size <= sectorsize); 489 - else 489 + ASSERT(compressed_size <= PAGE_SIZE); 490 + } else { 490 491 ASSERT(compressed_size == 0); 492 + } 491 493 492 494 if (compressed_size && compressed_folio) 493 495 cur_size = compressed_size; ··· 574 572 575 573 /* Inline extents must start at offset 0. */ 576 574 if (offset != 0) 575 + return false; 576 + 577 + /* 578 + * Even for bs > ps cases, cow_file_range_inline() can only accept a 579 + * single folio. 580 + * 581 + * This can be problematic and cause access beyond the page boundary if 582 + * a page-sized folio is passed into that function. 583 + * And encoded write is doing exactly that. 584 + * So limit the inlined extent size to PAGE_SIZE here. 585 + */ 586 + if (size > PAGE_SIZE || compressed_size > PAGE_SIZE) 577 587 return false; 578 588 579 589 /* Inline extents are limited to sectorsize. */ ··· 4048 4034 btrfs_set_inode_mapping_order(inode); 4049 4035 4050 4036 cache_index: 4051 - ret = btrfs_init_file_extent_tree(inode); 4052 - if (ret) 4053 - goto out; 4054 - btrfs_inode_set_file_extent_range(inode, 0, 4055 - round_up(i_size_read(vfs_inode), fs_info->sectorsize)); 4056 4037 /* 4057 4038 * If we were modified in the current generation and evicted from memory 4058 4039 * and then re-read we need to do a full sync since we don't have any ··· 4133 4124 "error loading props for ino %llu (root %llu): %d", 4134 4125 btrfs_ino(inode), btrfs_root_id(root), ret); 4135 4126 } 4127 + 4128 + /* 4129 + * We don't need the path anymore, so release it to avoid holding a read 4130 + * lock on a leaf while calling btrfs_init_file_extent_tree(), which can 4131 + * allocate memory that triggers reclaim (GFP_KERNEL) and cause a locking 4132 + * dependency. 4133 + */ 4134 + btrfs_release_path(path); 4135 + 4136 + ret = btrfs_init_file_extent_tree(inode); 4137 + if (ret) 4138 + goto out; 4139 + btrfs_inode_set_file_extent_range(inode, 0, 4140 + round_up(i_size_read(vfs_inode), fs_info->sectorsize)); 4136 4141 4137 4142 if (!maybe_acls) 4138 4143 cache_no_acl(vfs_inode);
+5 -7
fs/btrfs/super.c
··· 736 736 */ 737 737 void btrfs_set_free_space_cache_settings(struct btrfs_fs_info *fs_info) 738 738 { 739 - if (fs_info->sectorsize < PAGE_SIZE) { 739 + if (fs_info->sectorsize != PAGE_SIZE && btrfs_test_opt(fs_info, SPACE_CACHE)) { 740 + btrfs_info(fs_info, 741 + "forcing free space tree for sector size %u with page size %lu", 742 + fs_info->sectorsize, PAGE_SIZE); 740 743 btrfs_clear_opt(fs_info->mount_opt, SPACE_CACHE); 741 - if (!btrfs_test_opt(fs_info, FREE_SPACE_TREE)) { 742 - btrfs_info(fs_info, 743 - "forcing free space tree for sector size %u with page size %lu", 744 - fs_info->sectorsize, PAGE_SIZE); 745 - btrfs_set_opt(fs_info->mount_opt, FREE_SPACE_TREE); 746 - } 744 + btrfs_set_opt(fs_info->mount_opt, FREE_SPACE_TREE); 747 745 } 748 746 749 747 /*
+1 -1
fs/btrfs/tree-log.c
··· 190 190 191 191 btrfs_abort_transaction(wc->trans, error); 192 192 193 - if (wc->subvol_path->nodes[0]) { 193 + if (wc->subvol_path && wc->subvol_path->nodes[0]) { 194 194 btrfs_crit(fs_info, 195 195 "subvolume (root %llu) leaf currently being processed:", 196 196 btrfs_root_id(wc->root));
+2 -1
fs/ecryptfs/inode.c
··· 533 533 fsstack_copy_inode_size(dir, lower_dir); 534 534 set_nlink(dir, lower_dir->i_nlink); 535 535 out: 536 + dput(lower_dir_dentry); 536 537 end_creating(lower_dentry); 537 538 if (d_really_is_negative(dentry)) 538 539 d_drop(dentry); ··· 585 584 fsstack_copy_attr_times(dir, lower_dir); 586 585 fsstack_copy_inode_size(dir, lower_dir); 587 586 out: 588 - end_removing(lower_dentry); 587 + end_creating(lower_dentry); 589 588 if (d_really_is_negative(dentry)) 590 589 d_drop(dentry); 591 590 return rc;
+13 -6
fs/erofs/super.c
··· 644 644 * fs contexts (including its own) due to self-controlled RO 645 645 * accesses/contexts and no side-effect changes that need to 646 646 * context save & restore so it can reuse the current thread 647 - * context. However, it still needs to bump `s_stack_depth` to 648 - * avoid kernel stack overflow from nested filesystems. 647 + * context. 648 + * However, we still need to prevent kernel stack overflow due 649 + * to filesystem nesting: just ensure that s_stack_depth is 0 650 + * to disallow mounting EROFS on stacked filesystems. 651 + * Note: s_stack_depth is not incremented here for now, since 652 + * EROFS is currently the only fs supporting file-backed mounts. 653 + * It MUST change if another fs plans to support them, which 654 + * may also require adjusting FILESYSTEM_MAX_STACK_DEPTH. 649 655 */ 650 656 if (erofs_is_fileio_mode(sbi)) { 651 - sb->s_stack_depth = 652 - file_inode(sbi->dif0.file)->i_sb->s_stack_depth + 1; 653 - if (sb->s_stack_depth > FILESYSTEM_MAX_STACK_DEPTH) { 654 - erofs_err(sb, "maximum fs stacking depth exceeded"); 657 + inode = file_inode(sbi->dif0.file); 658 + if ((inode->i_sb->s_op == &erofs_sops && 659 + !inode->i_sb->s_bdev) || 660 + inode->i_sb->s_stack_depth) { 661 + erofs_err(sb, "file-backed mounts cannot be applied to stacked fses"); 655 662 return -ENOTBLK; 656 663 } 657 664 }
+1 -1
fs/gfs2/lops.c
··· 484 484 new = bio_alloc(prev->bi_bdev, nr_iovecs, prev->bi_opf, GFP_NOIO); 485 485 bio_clone_blkg_association(new, prev); 486 486 new->bi_iter.bi_sector = bio_end_sector(prev); 487 - bio_chain(prev, new); 487 + bio_chain(new, prev); 488 488 submit_bio(prev); 489 489 return new; 490 490 }
+3
fs/inode.c
··· 1593 1593 * @hashval: hash value (usually inode number) to search for 1594 1594 * @test: callback used for comparisons between inodes 1595 1595 * @data: opaque data pointer to pass to @test 1596 + * @isnew: return argument telling whether I_NEW was set when 1597 + * the inode was found in hash (the caller needs to 1598 + * wait for I_NEW to clear) 1596 1599 * 1597 1600 * Search for the inode specified by @hashval and @data in the inode cache. 1598 1601 * If the inode is in the cache, the inode is returned with an incremented
+35 -15
fs/iomap/buffered-io.c
··· 832 832 if (!mapping_large_folio_support(iter->inode->i_mapping)) 833 833 len = min_t(size_t, len, PAGE_SIZE - offset_in_page(pos)); 834 834 835 - if (iter->fbatch) { 835 + if (iter->iomap.flags & IOMAP_F_FOLIO_BATCH) { 836 836 struct folio *folio = folio_batch_next(iter->fbatch); 837 837 838 838 if (!folio) ··· 929 929 * process so return and let the caller iterate and refill the batch. 930 930 */ 931 931 if (!folio) { 932 - WARN_ON_ONCE(!iter->fbatch); 932 + WARN_ON_ONCE(!(iter->iomap.flags & IOMAP_F_FOLIO_BATCH)); 933 933 return 0; 934 934 } 935 935 ··· 1544 1544 return status; 1545 1545 } 1546 1546 1547 - loff_t 1547 + /** 1548 + * iomap_fill_dirty_folios - fill a folio batch with dirty folios 1549 + * @iter: Iteration structure 1550 + * @start: Start offset of range. Updated based on lookup progress. 1551 + * @end: End offset of range 1552 + * @iomap_flags: Flags to set on the associated iomap to track the batch. 1553 + * 1554 + * Returns the folio count directly. Also returns the associated control flag if 1555 + * the batch lookup is performed and the expected offset of a subsequent 1556 + * lookup via out params. The caller is responsible for setting the flag on the 1557 + * associated iomap. 1558 + */ 1559 + unsigned int 1548 1560 iomap_fill_dirty_folios( 1549 1561 struct iomap_iter *iter, 1550 - loff_t offset, 1551 - loff_t length) 1562 + loff_t *start, 1563 + loff_t end, 1564 + unsigned int *iomap_flags) 1552 1565 { 1553 1566 struct address_space *mapping = iter->inode->i_mapping; 1554 - pgoff_t start = offset >> PAGE_SHIFT; 1555 - pgoff_t end = (offset + length - 1) >> PAGE_SHIFT; 1567 + pgoff_t pstart = *start >> PAGE_SHIFT; 1568 + pgoff_t pend = (end - 1) >> PAGE_SHIFT; 1569 + unsigned int count; 1556 1570 1557 - iter->fbatch = kmalloc(sizeof(struct folio_batch), GFP_KERNEL); 1558 - if (!iter->fbatch) 1559 - return offset + length; 1560 - folio_batch_init(iter->fbatch); 1571 + if (!iter->fbatch) { 1572 + *start = end; 1573 + return 0; 1574 + } 1561 1575 1562 - filemap_get_folios_dirty(mapping, &start, end, iter->fbatch); 1563 - return (start << PAGE_SHIFT); 1576 + count = filemap_get_folios_dirty(mapping, &pstart, pend, iter->fbatch); 1577 + *start = (pstart << PAGE_SHIFT); 1578 + *iomap_flags |= IOMAP_F_FOLIO_BATCH; 1579 + return count; 1564 1580 } 1565 1581 EXPORT_SYMBOL_GPL(iomap_fill_dirty_folios); 1566 1582 ··· 1585 1569 const struct iomap_ops *ops, 1586 1570 const struct iomap_write_ops *write_ops, void *private) 1587 1571 { 1572 + struct folio_batch fbatch; 1588 1573 struct iomap_iter iter = { 1589 1574 .inode = inode, 1590 1575 .pos = pos, 1591 1576 .len = len, 1592 1577 .flags = IOMAP_ZERO, 1593 1578 .private = private, 1579 + .fbatch = &fbatch, 1594 1580 }; 1595 1581 struct address_space *mapping = inode->i_mapping; 1596 1582 int ret; 1597 1583 bool range_dirty; 1584 + 1585 + folio_batch_init(&fbatch); 1598 1586 1599 1587 /* 1600 1588 * To avoid an unconditional flush, check pagecache state and only flush ··· 1610 1590 while ((ret = iomap_iter(&iter, ops)) > 0) { 1611 1591 const struct iomap *srcmap = iomap_iter_srcmap(&iter); 1612 1592 1613 - if (WARN_ON_ONCE(iter.fbatch && 1593 + if (WARN_ON_ONCE((iter.iomap.flags & IOMAP_F_FOLIO_BATCH) && 1614 1594 srcmap->type != IOMAP_UNWRITTEN)) 1615 1595 return -EIO; 1616 1596 1617 - if (!iter.fbatch && 1597 + if (!(iter.iomap.flags & IOMAP_F_FOLIO_BATCH) && 1618 1598 (srcmap->type == IOMAP_HOLE || 1619 1599 srcmap->type == IOMAP_UNWRITTEN)) { 1620 1600 s64 status;
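Under the reworked contract, iomap_fill_dirty_folios() takes the start offset by reference, returns the folio count, and reports the batch via a flag the caller must propagate to the iomap. A hedged ->iomap_begin fragment modeled on the xfs caller further down (variable names hypothetical):

	loff_t batch_start = offset;
	unsigned int iomap_flags = 0;
	unsigned int nr;

	nr = iomap_fill_dirty_folios(iter, &batch_start, offset + length,
				     &iomap_flags);
	/* batch_start now points past the gathered folios (or at the range
	 * end when no batch could be used); trim the mapping to it and set
	 * iomap_flags, which may include IOMAP_F_FOLIO_BATCH, on the iomap. */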
+3 -3
fs/iomap/iter.c
··· 8 8 9 9 static inline void iomap_iter_reset_iomap(struct iomap_iter *iter) 10 10 { 11 - if (iter->fbatch) { 11 + if (iter->iomap.flags & IOMAP_F_FOLIO_BATCH) { 12 12 folio_batch_release(iter->fbatch); 13 - kfree(iter->fbatch); 14 - iter->fbatch = NULL; 13 + folio_batch_reinit(iter->fbatch); 14 + iter->iomap.flags &= ~IOMAP_F_FOLIO_BATCH; 15 15 } 16 16 17 17 iter->status = 0;
+2 -2
fs/jffs2/wbuf.c
··· 2 2 * JFFS2 -- Journalling Flash File System, Version 2. 3 3 * 4 4 * Copyright © 2001-2007 Red Hat, Inc. 5 - * Copyright © 2004 Thomas Gleixner <tglx@linutronix.de> 5 + * Copyright © 2004 Thomas Gleixner <tglx@kernel.org> 6 6 * 7 7 * Created by David Woodhouse <dwmw2@infradead.org> 8 - * Modified debugged and enhanced by Thomas Gleixner <tglx@linutronix.de> 8 + * Modified debugged and enhanced by Thomas Gleixner <tglx@kernel.org> 9 9 * 10 10 * For licensing information, see the file 'LICENCE' in this directory. 11 11 *
+61 -58
fs/locks.c
··· 369 369 while (!list_empty(dispose)) { 370 370 flc = list_first_entry(dispose, struct file_lock_core, flc_list); 371 371 list_del_init(&flc->flc_list); 372 - if (flc->flc_flags & (FL_LEASE|FL_DELEG|FL_LAYOUT)) 373 - locks_free_lease(file_lease(flc)); 374 - else 375 - locks_free_lock(file_lock(flc)); 372 + locks_free_lock(file_lock(flc)); 373 + } 374 + } 375 + 376 + static void 377 + lease_dispose_list(struct list_head *dispose) 378 + { 379 + struct file_lock_core *flc; 380 + 381 + while (!list_empty(dispose)) { 382 + flc = list_first_entry(dispose, struct file_lock_core, flc_list); 383 + list_del_init(&flc->flc_list); 384 + locks_free_lease(file_lease(flc)); 376 385 } 377 386 } 378 387 ··· 585 576 __f_setown(filp, task_pid(current), PIDTYPE_TGID, 0); 586 577 } 587 578 579 + /** 580 + * lease_open_conflict - see if the given file points to an inode that has 581 + * an existing open that would conflict with the 582 + * desired lease. 583 + * @filp: file to check 584 + * @arg: type of lease that we're trying to acquire 585 + * 586 + * Check to see if there's an existing open fd on this file that would 587 + * conflict with the lease we're trying to set. 588 + */ 589 + static int 590 + lease_open_conflict(struct file *filp, const int arg) 591 + { 592 + struct inode *inode = file_inode(filp); 593 + int self_wcount = 0, self_rcount = 0; 594 + 595 + if (arg == F_RDLCK) 596 + return inode_is_open_for_write(inode) ? -EAGAIN : 0; 597 + else if (arg != F_WRLCK) 598 + return 0; 599 + 600 + /* 601 + * Make sure that only read/write count is from lease requestor. 602 + * Note that this will result in denying write leases when i_writecount 603 + * is negative, which is what we want. (We shouldn't grant write leases 604 + * on files open for execution.) 605 + */ 606 + if (filp->f_mode & FMODE_WRITE) 607 + self_wcount = 1; 608 + else if (filp->f_mode & FMODE_READ) 609 + self_rcount = 1; 610 + 611 + if (atomic_read(&inode->i_writecount) != self_wcount || 612 + atomic_read(&inode->i_readcount) != self_rcount) 613 + return -EAGAIN; 614 + 615 + return 0; 616 + } 617 + 588 618 static const struct lease_manager_operations lease_manager_ops = { 589 619 .lm_break = lease_break_callback, 590 620 .lm_change = lease_modify, 591 621 .lm_setup = lease_setup, 622 + .lm_open_conflict = lease_open_conflict, 592 623 }; 593 624 594 625 /* ··· 1669 1620 spin_unlock(&ctx->flc_lock); 1670 1621 percpu_up_read(&file_rwsem); 1671 1622 1672 - locks_dispose_list(&dispose); 1623 + lease_dispose_list(&dispose); 1673 1624 error = wait_event_interruptible_timeout(new_fl->c.flc_wait, 1674 1625 list_empty(&new_fl->c.flc_blocked_member), 1675 1626 break_time); ··· 1692 1643 out: 1693 1644 spin_unlock(&ctx->flc_lock); 1694 1645 percpu_up_read(&file_rwsem); 1695 - locks_dispose_list(&dispose); 1646 + lease_dispose_list(&dispose); 1696 1647 free_lock: 1697 1648 locks_free_lease(new_fl); 1698 1649 return error; ··· 1776 1727 spin_unlock(&ctx->flc_lock); 1777 1728 percpu_up_read(&file_rwsem); 1778 1729 1779 - locks_dispose_list(&dispose); 1730 + lease_dispose_list(&dispose); 1780 1731 } 1781 1732 return type; 1782 1733 } ··· 1791 1742 if (deleg->d_flags != 0 || deleg->__pad != 0) 1792 1743 return -EINVAL; 1793 1744 deleg->d_type = __fcntl_getlease(filp, FL_DELEG); 1794 - return 0; 1795 - } 1796 - 1797 - /** 1798 - * check_conflicting_open - see if the given file points to an inode that has 1799 - * an existing open that would conflict with the 1800 - * desired lease. 
1801 - * @filp: file to check 1802 - * @arg: type of lease that we're trying to acquire 1803 - * @flags: current lock flags 1804 - * 1805 - * Check to see if there's an existing open fd on this file that would 1806 - * conflict with the lease we're trying to set. 1807 - */ 1808 - static int 1809 - check_conflicting_open(struct file *filp, const int arg, int flags) 1810 - { 1811 - struct inode *inode = file_inode(filp); 1812 - int self_wcount = 0, self_rcount = 0; 1813 - 1814 - if (flags & FL_LAYOUT) 1815 - return 0; 1816 - if (flags & FL_DELEG) 1817 - /* We leave these checks to the caller */ 1818 - return 0; 1819 - 1820 - if (arg == F_RDLCK) 1821 - return inode_is_open_for_write(inode) ? -EAGAIN : 0; 1822 - else if (arg != F_WRLCK) 1823 - return 0; 1824 - 1825 - /* 1826 - * Make sure that only read/write count is from lease requestor. 1827 - * Note that this will result in denying write leases when i_writecount 1828 - * is negative, which is what we want. (We shouldn't grant write leases 1829 - * on files open for execution.) 1830 - */ 1831 - if (filp->f_mode & FMODE_WRITE) 1832 - self_wcount = 1; 1833 - else if (filp->f_mode & FMODE_READ) 1834 - self_rcount = 1; 1835 - 1836 - if (atomic_read(&inode->i_writecount) != self_wcount || 1837 - atomic_read(&inode->i_readcount) != self_rcount) 1838 - return -EAGAIN; 1839 - 1840 1745 return 0; 1841 1746 } 1842 1747 ··· 1830 1827 percpu_down_read(&file_rwsem); 1831 1828 spin_lock(&ctx->flc_lock); 1832 1829 time_out_leases(inode, &dispose); 1833 - error = check_conflicting_open(filp, arg, lease->c.flc_flags); 1830 + error = lease->fl_lmops->lm_open_conflict(filp, arg); 1834 1831 if (error) 1835 1832 goto out; 1836 1833 ··· 1887 1884 * precedes these checks. 1888 1885 */ 1889 1886 smp_mb(); 1890 - error = check_conflicting_open(filp, arg, lease->c.flc_flags); 1887 + error = lease->fl_lmops->lm_open_conflict(filp, arg); 1891 1888 if (error) { 1892 1889 locks_unlink_lock_ctx(&lease->c); 1893 1890 goto out; ··· 1899 1896 out: 1900 1897 spin_unlock(&ctx->flc_lock); 1901 1898 percpu_up_read(&file_rwsem); 1902 - locks_dispose_list(&dispose); 1899 + lease_dispose_list(&dispose); 1903 1900 if (is_deleg) 1904 1901 inode_unlock(inode); 1905 1902 if (!error && !my_fl) ··· 1935 1932 error = fl->fl_lmops->lm_change(victim, F_UNLCK, &dispose); 1936 1933 spin_unlock(&ctx->flc_lock); 1937 1934 percpu_up_read(&file_rwsem); 1938 - locks_dispose_list(&dispose); 1935 + lease_dispose_list(&dispose); 1939 1936 return error; 1940 1937 } 1941 1938 ··· 2738 2735 spin_unlock(&ctx->flc_lock); 2739 2736 percpu_up_read(&file_rwsem); 2740 2737 2741 - locks_dispose_list(&dispose); 2738 + lease_dispose_list(&dispose); 2742 2739 } 2743 2740 2744 2741 /*
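Since generic_add_lease() now calls ->lm_open_conflict unconditionally, every lease manager must supply the hook; managers that do their own conflict checking opt out the way the nfsd hunks below do. A minimal sketch with hypothetical demo_* names (lm_break/lm_change signatures assumed from the surrounding code):

static bool demo_lm_break(struct file_lease *fl);
static int demo_lm_change(struct file_lease *onlist, int arg,
			  struct list_head *dispose);

static int demo_lm_open_conflict(struct file *filp, int arg)
{
	return 0;	/* open/lease conflicts resolved at a higher layer */
}

static const struct lease_manager_operations demo_lm_ops = {
	.lm_break	  = demo_lm_break,
	.lm_change	  = demo_lm_change,
	.lm_open_conflict = demo_lm_open_conflict,
};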
+15 -6
fs/namei.c
··· 830 830 static bool legitimize_links(struct nameidata *nd) 831 831 { 832 832 int i; 833 - if (unlikely(nd->flags & LOOKUP_CACHED)) { 834 - drop_links(nd); 835 - nd->depth = 0; 836 - return false; 837 - } 833 + 834 + VFS_BUG_ON(nd->flags & LOOKUP_CACHED); 835 + 838 836 for (i = 0; i < nd->depth; i++) { 839 837 struct saved *last = nd->stack + i; 840 838 if (unlikely(!legitimize_path(nd, &last->link, last->seq))) { ··· 881 883 882 884 BUG_ON(!(nd->flags & LOOKUP_RCU)); 883 885 886 + if (unlikely(nd->flags & LOOKUP_CACHED)) { 887 + drop_links(nd); 888 + nd->depth = 0; 889 + goto out1; 890 + } 884 891 if (unlikely(nd->depth && !legitimize_links(nd))) 885 892 goto out1; 886 893 if (unlikely(!legitimize_path(nd, &nd->path, nd->seq))) ··· 921 918 int res; 922 919 BUG_ON(!(nd->flags & LOOKUP_RCU)); 923 920 921 + if (unlikely(nd->flags & LOOKUP_CACHED)) { 922 + drop_links(nd); 923 + nd->depth = 0; 924 + goto out2; 925 + } 924 926 if (unlikely(nd->depth && !legitimize_links(nd))) 925 927 goto out2; 926 928 res = __legitimize_mnt(nd->path.mnt, nd->m_seq); ··· 2844 2836 } 2845 2837 2846 2838 /** 2847 - * start_dirop - begin a create or remove dirop, performing locking and lookup 2839 + * __start_dirop - begin a create or remove dirop, performing locking and lookup 2848 2840 * @parent: the dentry of the parent in which the operation will occur 2849 2841 * @name: a qstr holding the name within that parent 2850 2842 * @lookup_flags: intent and other lookup flags. 2843 + * @state: task state bitmask 2851 2844 * 2852 2845 * The lookup is performed and necessary locks are taken so that, on success, 2853 2846 * the returned dentry can be operated on safely.
+1 -1
fs/netfs/read_collect.c
··· 137 137 rreq->front_folio_order = order; 138 138 fsize = PAGE_SIZE << order; 139 139 fpos = folio_pos(folio); 140 - fend = umin(fpos + fsize, rreq->i_size); 140 + fend = fpos + fsize; 141 141 142 142 trace_netfs_collect_folio(rreq, folio, fend, collected_to); 143 143
+21 -2
fs/nfsd/nfs4layouts.c
··· 764 764 return lease_modify(onlist, arg, dispose); 765 765 } 766 766 767 + /** 768 + * nfsd4_layout_lm_open_conflict - see if the given file points to an inode that has 769 + * an existing open that would conflict with the 770 + * desired lease. 771 + * @filp: file to check 772 + * @arg: type of lease that we're trying to acquire 773 + * 774 + * The kernel will call into this operation to determine whether there 775 + * are conflicting opens that may prevent the layout from being granted. 776 + * For nfsd, that check is done at a higher level, so this trivially 777 + * returns 0. 778 + */ 779 + static int 780 + nfsd4_layout_lm_open_conflict(struct file *filp, int arg) 781 + { 782 + return 0; 783 + } 784 + 767 785 static const struct lease_manager_operations nfsd4_layouts_lm_ops = { 768 - .lm_break = nfsd4_layout_lm_break, 769 - .lm_change = nfsd4_layout_lm_change, 786 + .lm_break = nfsd4_layout_lm_break, 787 + .lm_change = nfsd4_layout_lm_change, 788 + .lm_open_conflict = nfsd4_layout_lm_open_conflict, 770 789 }; 771 790 772 791 int
+19
fs/nfsd/nfs4state.c
··· 5555 5555 return -EAGAIN; 5556 5556 } 5557 5557 5558 + /** 5559 + * nfsd4_deleg_lm_open_conflict - see if the given file points to an inode that has 5560 + * an existing open that would conflict with the 5561 + * desired lease. 5562 + * @filp: file to check 5563 + * @arg: type of lease that we're trying to acquire 5564 + * 5565 + * The kernel will call into this operation to determine whether there 5566 + * are conflicting opens that may prevent the deleg from being granted. 5567 + * For nfsd, that check is done at a higher level, so this trivially 5568 + * returns 0. 5569 + */ 5570 + static int 5571 + nfsd4_deleg_lm_open_conflict(struct file *filp, int arg) 5572 + { 5573 + return 0; 5574 + } 5575 + 5558 5576 static const struct lease_manager_operations nfsd_lease_mng_ops = { 5559 5577 .lm_breaker_owns_lease = nfsd_breaker_owns_lease, 5560 5578 .lm_break = nfsd_break_deleg_cb, 5561 5579 .lm_change = nfsd_change_deleg_cb, 5580 + .lm_open_conflict = nfsd4_deleg_lm_open_conflict, 5562 5581 }; 5563 5582 5564 5583 static __be32 nfsd4_check_seqid(struct nfsd4_compound_state *cstate, struct nfs4_stateowner *so, u32 seqid)
+18
fs/pidfs.c
··· 517 517 switch (cmd) { 518 518 /* Namespaces that hang of nsproxy. */ 519 519 case PIDFD_GET_CGROUP_NAMESPACE: 520 + #ifdef CONFIG_CGROUPS 520 521 if (!ns_ref_get(nsp->cgroup_ns)) 521 522 break; 522 523 ns_common = to_ns_common(nsp->cgroup_ns); 524 + #endif 523 525 break; 524 526 case PIDFD_GET_IPC_NAMESPACE: 527 + #ifdef CONFIG_IPC_NS 525 528 if (!ns_ref_get(nsp->ipc_ns)) 526 529 break; 527 530 ns_common = to_ns_common(nsp->ipc_ns); 531 + #endif 528 532 break; 529 533 case PIDFD_GET_MNT_NAMESPACE: 530 534 if (!ns_ref_get(nsp->mnt_ns)) ··· 536 532 ns_common = to_ns_common(nsp->mnt_ns); 537 533 break; 538 534 case PIDFD_GET_NET_NAMESPACE: 535 + #ifdef CONFIG_NET_NS 539 536 if (!ns_ref_get(nsp->net_ns)) 540 537 break; 541 538 ns_common = to_ns_common(nsp->net_ns); 539 + #endif 542 540 break; 543 541 case PIDFD_GET_PID_FOR_CHILDREN_NAMESPACE: 542 + #ifdef CONFIG_PID_NS 544 543 if (!ns_ref_get(nsp->pid_ns_for_children)) 545 544 break; 546 545 ns_common = to_ns_common(nsp->pid_ns_for_children); 546 + #endif 547 547 break; 548 548 case PIDFD_GET_TIME_NAMESPACE: 549 + #ifdef CONFIG_TIME_NS 549 550 if (!ns_ref_get(nsp->time_ns)) 550 551 break; 551 552 ns_common = to_ns_common(nsp->time_ns); 553 + #endif 552 554 break; 553 555 case PIDFD_GET_TIME_FOR_CHILDREN_NAMESPACE: 556 + #ifdef CONFIG_TIME_NS 554 557 if (!ns_ref_get(nsp->time_ns_for_children)) 555 558 break; 556 559 ns_common = to_ns_common(nsp->time_ns_for_children); 560 + #endif 557 561 break; 558 562 case PIDFD_GET_UTS_NAMESPACE: 563 + #ifdef CONFIG_UTS_NS 559 564 if (!ns_ref_get(nsp->uts_ns)) 560 565 break; 561 566 ns_common = to_ns_common(nsp->uts_ns); 567 + #endif 562 568 break; 563 569 /* Namespaces that don't hang of nsproxy. */ 564 570 case PIDFD_GET_USER_NAMESPACE: 571 + #ifdef CONFIG_USER_NS 565 572 scoped_guard(rcu) { 566 573 struct user_namespace *user_ns; 567 574 ··· 581 566 break; 582 567 ns_common = to_ns_common(user_ns); 583 568 } 569 + #endif 584 570 break; 585 571 case PIDFD_GET_PID_NAMESPACE: 572 + #ifdef CONFIG_PID_NS 586 573 scoped_guard(rcu) { 587 574 struct pid_namespace *pid_ns; 588 575 ··· 593 576 break; 594 577 ns_common = to_ns_common(pid_ns); 595 578 } 579 + #endif 596 580 break; 597 581 default: 598 582 return -ENOIOCTLCMD;
+6 -5
fs/xfs/xfs_iomap.c
··· 1831 1831 */ 1832 1832 if (flags & IOMAP_ZERO) { 1833 1833 xfs_fileoff_t eof_fsb = XFS_B_TO_FSB(mp, XFS_ISIZE(ip)); 1834 - u64 end; 1835 1834 1836 1835 if (isnullstartblock(imap.br_startblock) && 1837 1836 offset_fsb >= eof_fsb) ··· 1850 1851 */ 1851 1852 if (imap.br_state == XFS_EXT_UNWRITTEN && 1852 1853 offset_fsb < eof_fsb) { 1853 - loff_t len = min(count, 1854 - XFS_FSB_TO_B(mp, imap.br_blockcount)); 1854 + loff_t foffset = offset, fend; 1855 1855 1856 - end = iomap_fill_dirty_folios(iter, offset, len); 1856 + fend = offset + 1857 + min(count, XFS_FSB_TO_B(mp, imap.br_blockcount)); 1858 + iomap_fill_dirty_folios(iter, &foffset, fend, 1859 + &iomap_flags); 1857 1860 end_fsb = min_t(xfs_fileoff_t, end_fsb, 1858 - XFS_B_TO_FSB(mp, end)); 1861 + XFS_B_TO_FSB(mp, foffset)); 1859 1862 } 1860 1863 1861 1864 xfs_trim_extent(&imap, offset_fsb, end_fsb - offset_fsb);
+1 -1
include/acpi/acpi_drivers.h
··· 51 51 52 52 int acpi_irq_penalty_init(void); 53 53 int acpi_pci_link_allocate_irq(acpi_handle handle, int index, int *triggering, 54 - int *polarity, char **name); 54 + int *polarity, char **name, u32 *gsi); 55 55 int acpi_pci_link_free_irq(acpi_handle handle); 56 56 57 57 /* ACPI PCI Device Binding */
+22
include/drm/drm_atomic_helper.h
··· 60 60 int drm_atomic_helper_check_planes(struct drm_device *dev, 61 61 struct drm_atomic_state *state); 62 62 int drm_atomic_helper_check_crtc_primary_plane(struct drm_crtc_state *crtc_state); 63 + void drm_atomic_helper_commit_encoder_bridge_disable(struct drm_device *dev, 64 + struct drm_atomic_state *state); 65 + void drm_atomic_helper_commit_crtc_disable(struct drm_device *dev, 66 + struct drm_atomic_state *state); 67 + void drm_atomic_helper_commit_encoder_bridge_post_disable(struct drm_device *dev, 68 + struct drm_atomic_state *state); 63 69 int drm_atomic_helper_check(struct drm_device *dev, 64 70 struct drm_atomic_state *state); 65 71 void drm_atomic_helper_commit_tail(struct drm_atomic_state *state); ··· 95 89 void 96 90 drm_atomic_helper_calc_timestamping_constants(struct drm_atomic_state *state); 97 91 92 + void drm_atomic_helper_commit_crtc_set_mode(struct drm_device *dev, 93 + struct drm_atomic_state *state); 94 + 98 95 void drm_atomic_helper_commit_modeset_disables(struct drm_device *dev, 99 96 struct drm_atomic_state *state); 97 + 98 + void drm_atomic_helper_commit_writebacks(struct drm_device *dev, 99 + struct drm_atomic_state *state); 100 + 101 + void drm_atomic_helper_commit_encoder_bridge_pre_enable(struct drm_device *dev, 102 + struct drm_atomic_state *state); 103 + 104 + void drm_atomic_helper_commit_crtc_enable(struct drm_device *dev, 105 + struct drm_atomic_state *state); 106 + 107 + void drm_atomic_helper_commit_encoder_bridge_enable(struct drm_device *dev, 108 + struct drm_atomic_state *state); 109 + 100 110 void drm_atomic_helper_commit_modeset_enables(struct drm_device *dev, 101 111 struct drm_atomic_state *old_state); 102 112
+66 -183
include/drm/drm_bridge.h
··· 176 176 /** 177 177 * @disable: 178 178 * 179 - * The @disable callback should disable the bridge. 179 + * This callback should disable the bridge. It is called right before 180 + * the preceding element in the display pipe is disabled. If the 181 + * preceding element is a bridge this means it's called before that 182 + * bridge's @disable vfunc. If the preceding element is a &drm_encoder 183 + * it's called right before the &drm_encoder_helper_funcs.disable, 184 + * &drm_encoder_helper_funcs.prepare or &drm_encoder_helper_funcs.dpms 185 + * hook. 180 186 * 181 187 * The bridge can assume that the display pipe (i.e. clocks and timing 182 188 * signals) feeding it is still running when this callback is called. 183 - * 184 - * 185 - * If the preceding element is a &drm_bridge, then this is called before 186 - * that bridge is disabled via one of: 187 - * 188 - * - &drm_bridge_funcs.disable 189 - * - &drm_bridge_funcs.atomic_disable 190 - * 191 - * If the preceding element of the bridge is a display controller, then 192 - * this callback is called before the encoder is disabled via one of: 193 - * 194 - * - &drm_encoder_helper_funcs.atomic_disable 195 - * - &drm_encoder_helper_funcs.prepare 196 - * - &drm_encoder_helper_funcs.disable 197 - * - &drm_encoder_helper_funcs.dpms 198 - * 199 - * and the CRTC is disabled via one of: 200 - * 201 - * - &drm_crtc_helper_funcs.prepare 202 - * - &drm_crtc_helper_funcs.atomic_disable 203 - * - &drm_crtc_helper_funcs.disable 204 - * - &drm_crtc_helper_funcs.dpms. 205 189 * 206 190 * The @disable callback is optional. 207 191 * ··· 199 215 /** 200 216 * @post_disable: 201 217 * 218 + * This callback should disable the bridge. It is called right after the 219 + * preceding element in the display pipe is disabled. If the preceding 220 + * element is a bridge this means it's called after that bridge's 221 + * @post_disable function. If the preceding element is a &drm_encoder 222 + * it's called right after the encoder's 223 + * &drm_encoder_helper_funcs.disable, &drm_encoder_helper_funcs.prepare 224 + * or &drm_encoder_helper_funcs.dpms hook. 225 + * 202 226 * The bridge must assume that the display pipe (i.e. clocks and timing 203 - * signals) feeding this bridge is no longer running when the 204 - * @post_disable is called. 205 - * 206 - * This callback should perform all the actions required by the hardware 207 - * after it has stopped receiving signals from the preceding element. 208 - * 209 - * If the preceding element is a &drm_bridge, then this is called after 210 - * that bridge is post-disabled (unless marked otherwise by the 211 - * @pre_enable_prev_first flag) via one of: 212 - * 213 - * - &drm_bridge_funcs.post_disable 214 - * - &drm_bridge_funcs.atomic_post_disable 215 - * 216 - * If the preceding element of the bridge is a display controller, then 217 - * this callback is called after the encoder is disabled via one of: 218 - * 219 - * - &drm_encoder_helper_funcs.atomic_disable 220 - * - &drm_encoder_helper_funcs.prepare 221 - * - &drm_encoder_helper_funcs.disable 222 - * - &drm_encoder_helper_funcs.dpms 223 - * 224 - * and the CRTC is disabled via one of: 225 - * 226 - * - &drm_crtc_helper_funcs.prepare 227 - * - &drm_crtc_helper_funcs.atomic_disable 228 - * - &drm_crtc_helper_funcs.disable 229 - * - &drm_crtc_helper_funcs.dpms 227 + * signals) feeding it is no longer running when this callback is 228 + * called. 230 229 * 231 230 * The @post_disable callback is optional. 
232 231 * ··· 252 285 /** 253 286 * @pre_enable: 254 287 * 288 + * This callback should enable the bridge. It is called right before 289 + * the preceding element in the display pipe is enabled. If the 290 + * preceding element is a bridge this means it's called before that 291 + * bridge's @pre_enable function. If the preceding element is a 292 + * &drm_encoder it's called right before the encoder's 293 + * &drm_encoder_helper_funcs.enable, &drm_encoder_helper_funcs.commit or 294 + * &drm_encoder_helper_funcs.dpms hook. 295 + * 255 296 * The display pipe (i.e. clocks and timing signals) feeding this bridge 256 - * will not yet be running when the @pre_enable is called. 257 - * 258 - * This callback should perform all the necessary actions to prepare the 259 - * bridge to accept signals from the preceding element. 260 - * 261 - * If the preceding element is a &drm_bridge, then this is called before 262 - * that bridge is pre-enabled (unless marked otherwise by 263 - * @pre_enable_prev_first flag) via one of: 264 - * 265 - * - &drm_bridge_funcs.pre_enable 266 - * - &drm_bridge_funcs.atomic_pre_enable 267 - * 268 - * If the preceding element of the bridge is a display controller, then 269 - * this callback is called before the CRTC is enabled via one of: 270 - * 271 - * - &drm_crtc_helper_funcs.atomic_enable 272 - * - &drm_crtc_helper_funcs.commit 273 - * 274 - * and the encoder is enabled via one of: 275 - * 276 - * - &drm_encoder_helper_funcs.atomic_enable 277 - * - &drm_encoder_helper_funcs.enable 278 - * - &drm_encoder_helper_funcs.commit 297 + * will not yet be running when this callback is called. The bridge must 298 + * not enable the display link feeding the next bridge in the chain (if 299 + * there is one) when this callback is called. 279 300 * 280 301 * The @pre_enable callback is optional. 281 302 * ··· 277 322 /** 278 323 * @enable: 279 324 * 280 - * The @enable callback should enable the bridge. 325 + * This callback should enable the bridge. It is called right after 326 + * the preceding element in the display pipe is enabled. If the 327 + * preceding element is a bridge this means it's called after that 328 + * bridge's @enable function. If the preceding element is a 329 + * &drm_encoder it's called right after the encoder's 330 + * &drm_encoder_helper_funcs.enable, &drm_encoder_helper_funcs.commit or 331 + * &drm_encoder_helper_funcs.dpms hook. 281 332 * 282 333 * The bridge can assume that the display pipe (i.e. clocks and timing 283 334 * signals) feeding it is running when this callback is called. This 284 335 * callback must enable the display link feeding the next bridge in the 285 336 * chain if there is one. 286 - * 287 - * If the preceding element is a &drm_bridge, then this is called after 288 - * that bridge is enabled via one of: 289 - * 290 - * - &drm_bridge_funcs.enable 291 - * - &drm_bridge_funcs.atomic_enable 292 - * 293 - * If the preceding element of the bridge is a display controller, then 294 - * this callback is called after the CRTC is enabled via one of: 295 - * 296 - * - &drm_crtc_helper_funcs.atomic_enable 297 - * - &drm_crtc_helper_funcs.commit 298 - * 299 - * and the encoder is enabled via one of: 300 - * 301 - * - &drm_encoder_helper_funcs.atomic_enable 302 - * - &drm_encoder_helper_funcs.enable 303 - * - drm_encoder_helper_funcs.commit 304 337 * 305 338 * The @enable callback is optional. 306 339 * ··· 302 359 /** 303 360 * @atomic_pre_enable: 304 361 * 362 + * This callback should enable the bridge. 
It is called right before 363 + * the preceding element in the display pipe is enabled. If the 364 + * preceding element is a bridge this means it's called before that 365 + * bridge's @atomic_pre_enable or @pre_enable function. If the preceding 366 + * element is a &drm_encoder it's called right before the encoder's 367 + * &drm_encoder_helper_funcs.atomic_enable hook. 368 + * 305 369 * The display pipe (i.e. clocks and timing signals) feeding this bridge 306 - * will not yet be running when the @atomic_pre_enable is called. 307 - * 308 - * This callback should perform all the necessary actions to prepare the 309 - * bridge to accept signals from the preceding element. 310 - * 311 - * If the preceding element is a &drm_bridge, then this is called before 312 - * that bridge is pre-enabled (unless marked otherwise by 313 - * @pre_enable_prev_first flag) via one of: 314 - * 315 - * - &drm_bridge_funcs.pre_enable 316 - * - &drm_bridge_funcs.atomic_pre_enable 317 - * 318 - * If the preceding element of the bridge is a display controller, then 319 - * this callback is called before the CRTC is enabled via one of: 320 - * 321 - * - &drm_crtc_helper_funcs.atomic_enable 322 - * - &drm_crtc_helper_funcs.commit 323 - * 324 - * and the encoder is enabled via one of: 325 - * 326 - * - &drm_encoder_helper_funcs.atomic_enable 327 - * - &drm_encoder_helper_funcs.enable 328 - * - &drm_encoder_helper_funcs.commit 370 + * will not yet be running when this callback is called. The bridge must 371 + * not enable the display link feeding the next bridge in the chain (if 372 + * there is one) when this callback is called. 329 373 * 330 374 * The @atomic_pre_enable callback is optional. 331 375 */ ··· 322 392 /** 323 393 * @atomic_enable: 324 394 * 325 - * The @atomic_enable callback should enable the bridge. 395 + * This callback should enable the bridge. It is called right after 396 + * the preceding element in the display pipe is enabled. If the 397 + * preceding element is a bridge this means it's called after that 398 + * bridge's @atomic_enable or @enable function. If the preceding element 399 + * is a &drm_encoder it's called right after the encoder's 400 + * &drm_encoder_helper_funcs.atomic_enable hook. 326 401 * 327 402 * The bridge can assume that the display pipe (i.e. clocks and timing 328 403 * signals) feeding it is running when this callback is called. This 329 404 * callback must enable the display link feeding the next bridge in the 330 405 * chain if there is one. 331 - * 332 - * If the preceding element is a &drm_bridge, then this is called after 333 - * that bridge is enabled via one of: 334 - * 335 - * - &drm_bridge_funcs.enable 336 - * - &drm_bridge_funcs.atomic_enable 337 - * 338 - * If the preceding element of the bridge is a display controller, then 339 - * this callback is called after the CRTC is enabled via one of: 340 - * 341 - * - &drm_crtc_helper_funcs.atomic_enable 342 - * - &drm_crtc_helper_funcs.commit 343 - * 344 - * and the encoder is enabled via one of: 345 - * 346 - * - &drm_encoder_helper_funcs.atomic_enable 347 - * - &drm_encoder_helper_funcs.enable 348 - * - drm_encoder_helper_funcs.commit 349 406 * 350 407 * The @atomic_enable callback is optional. 351 408 */ ··· 341 424 /** 342 425 * @atomic_disable: 343 426 * 344 - * The @atomic_disable callback should disable the bridge. 427 + * This callback should disable the bridge. It is called right before 428 + * the preceding element in the display pipe is disabled. 
If the 429 + * preceding element is a bridge this means it's called before that 430 + * bridge's @atomic_disable or @disable vfunc. If the preceding element 431 + * is a &drm_encoder it's called right before the 432 + * &drm_encoder_helper_funcs.atomic_disable hook. 345 433 * 346 434 * The bridge can assume that the display pipe (i.e. clocks and timing 347 435 * signals) feeding it is still running when this callback is called. 348 - * 349 - * If the preceding element is a &drm_bridge, then this is called before 350 - * that bridge is disabled via one of: 351 - * 352 - * - &drm_bridge_funcs.disable 353 - * - &drm_bridge_funcs.atomic_disable 354 - * 355 - * If the preceding element of the bridge is a display controller, then 356 - * this callback is called before the encoder is disabled via one of: 357 - * 358 - * - &drm_encoder_helper_funcs.atomic_disable 359 - * - &drm_encoder_helper_funcs.prepare 360 - * - &drm_encoder_helper_funcs.disable 361 - * - &drm_encoder_helper_funcs.dpms 362 - * 363 - * and the CRTC is disabled via one of: 364 - * 365 - * - &drm_crtc_helper_funcs.prepare 366 - * - &drm_crtc_helper_funcs.atomic_disable 367 - * - &drm_crtc_helper_funcs.disable 368 - * - &drm_crtc_helper_funcs.dpms. 369 436 * 370 437 * The @atomic_disable callback is optional. 371 438 */ ··· 359 458 /** 360 459 * @atomic_post_disable: 361 460 * 461 + * This callback should disable the bridge. It is called right after the 462 + * preceding element in the display pipe is disabled. If the preceding 463 + * element is a bridge this means it's called after that bridge's 464 + * @atomic_post_disable or @post_disable function. If the preceding 465 + * element is a &drm_encoder it's called right after the encoder's 466 + * &drm_encoder_helper_funcs.atomic_disable hook. 467 + * 362 468 * The bridge must assume that the display pipe (i.e. clocks and timing 363 - * signals) feeding this bridge is no longer running when the 364 - * @atomic_post_disable is called. 365 - * 366 - * This callback should perform all the actions required by the hardware 367 - * after it has stopped receiving signals from the preceding element. 368 - * 369 - * If the preceding element is a &drm_bridge, then this is called after 370 - * that bridge is post-disabled (unless marked otherwise by the 371 - * @pre_enable_prev_first flag) via one of: 372 - * 373 - * - &drm_bridge_funcs.post_disable 374 - * - &drm_bridge_funcs.atomic_post_disable 375 - * 376 - * If the preceding element of the bridge is a display controller, then 377 - * this callback is called after the encoder is disabled via one of: 378 - * 379 - * - &drm_encoder_helper_funcs.atomic_disable 380 - * - &drm_encoder_helper_funcs.prepare 381 - * - &drm_encoder_helper_funcs.disable 382 - * - &drm_encoder_helper_funcs.dpms 383 - * 384 - * and the CRTC is disabled via one of: 385 - * 386 - * - &drm_crtc_helper_funcs.prepare 387 - * - &drm_crtc_helper_funcs.atomic_disable 388 - * - &drm_crtc_helper_funcs.disable 389 - * - &drm_crtc_helper_funcs.dpms 469 + * signals) feeding it is no longer running when this callback is 470 + * called. 390 471 * 391 472 * The @atomic_post_disable callback is optional. 392 473 */
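A minimal sketch of a hypothetical bridge driver following the ordering contract documented above (names are illustrative, and the signatures assume the recent atomic hooks that take a struct drm_atomic_state pointer):

static void my_bridge_atomic_pre_enable(struct drm_bridge *bridge,
					struct drm_atomic_state *state)
{
	/* Called before the preceding element is enabled: the input
	 * clocks and timing signals are not running yet, and the link
	 * to the next bridge must stay off.
	 */
}

static void my_bridge_atomic_enable(struct drm_bridge *bridge,
				    struct drm_atomic_state *state)
{
	/* Called after the preceding element is enabled: the input
	 * signal is live, so the link feeding the next bridge may now
	 * be enabled.
	 */
}

static const struct drm_bridge_funcs my_bridge_funcs = {
	.atomic_pre_enable	= my_bridge_atomic_pre_enable,
	.atomic_enable		= my_bridge_atomic_enable,
};

Disable is the mirror image: @atomic_disable runs while the input is still live, @atomic_post_disable after it has stopped.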
+5 -2
include/hyperv/hvgdk_mini.h
··· 578 578 struct hv_tlb_flush_ex { 579 579 u64 address_space; 580 580 u64 flags; 581 - struct hv_vpset hv_vp_set; 582 - u64 gva_list[]; 581 + __TRAILING_OVERLAP(struct hv_vpset, hv_vp_set, bank_contents, __packed, 582 + u64 gva_list[]; 583 + ); 583 584 } __packed; 585 + static_assert(offsetof(struct hv_tlb_flush_ex, hv_vp_set.bank_contents) == 586 + offsetof(struct hv_tlb_flush_ex, gva_list)); 584 587 585 588 struct ms_hyperv_tsc_page { /* HV_REFERENCE_TSC_PAGE */ 586 589 volatile u32 tsc_sequence;
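As used here, __TRAILING_OVERLAP() expresses that the two trailing flexible arrays, hv_vpset's bank_contents and gva_list, occupy the same storage at the end of hv_tlb_flush_ex; the static_assert then pins the two offsets together so the overlap cannot silently drift if either structure changes.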
+24
include/linux/can/can-ml.h
··· 46 46 #include <linux/list.h> 47 47 #include <linux/netdevice.h> 48 48 49 + /* exposed CAN device capabilities for network layer */ 50 + #define CAN_CAP_CC BIT(0) /* CAN CC aka Classical CAN */ 51 + #define CAN_CAP_FD BIT(1) /* CAN FD */ 52 + #define CAN_CAP_XL BIT(2) /* CAN XL */ 53 + #define CAN_CAP_RO BIT(3) /* read-only mode (LISTEN/RESTRICTED) */ 54 + 49 55 #define CAN_SFF_RCV_ARRAY_SZ (1 << CAN_SFF_ID_BITS) 50 56 #define CAN_EFF_RCV_HASH_BITS 10 51 57 #define CAN_EFF_RCV_ARRAY_SZ (1 << CAN_EFF_RCV_HASH_BITS) ··· 70 64 #ifdef CAN_J1939 71 65 struct j1939_priv *j1939_priv; 72 66 #endif 67 + u32 can_cap; 73 68 }; 74 69 75 70 static inline struct can_ml_priv *can_get_ml_priv(struct net_device *dev) ··· 82 75 struct can_ml_priv *ml_priv) 83 76 { 84 77 netdev_set_ml_priv(dev, ml_priv, ML_PRIV_CAN); 78 + } 79 + 80 + static inline bool can_cap_enabled(struct net_device *dev, u32 cap) 81 + { 82 + struct can_ml_priv *can_ml = can_get_ml_priv(dev); 83 + 84 + if (!can_ml) 85 + return false; 86 + 87 + return (can_ml->can_cap & cap); 88 + } 89 + 90 + static inline void can_set_cap(struct net_device *dev, u32 cap) 91 + { 92 + struct can_ml_priv *can_ml = can_get_ml_priv(dev); 93 + 94 + can_ml->can_cap = cap; 85 95 } 86 96 87 97 #endif /* CAN_ML_H */
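A hedged sketch of how the new helpers are meant to be used (call sites are illustrative; net/can/raw.c below switches to can_cap_enabled() for exactly this kind of check). Note that can_set_cap() dereferences the ml_priv without a NULL check, so it must only be called for devices that actually carry a struct can_ml_priv:

/* Driver side: advertise the device's capabilities, e.g. when the
 * interface is brought up (the flag combination is illustrative).
 */
can_set_cap(dev, CAN_CAP_CC | CAN_CAP_FD);

/* Protocol side: gate frame types on the advertised capabilities. */
if (can_is_canfd_skb(skb) && !can_cap_enabled(dev, CAN_CAP_FD))
	return -EINVAL;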
+1 -7
include/linux/can/dev.h
··· 111 111 void free_candev(struct net_device *dev); 112 112 113 113 /* a candev safe wrapper around netdev_priv */ 114 - #if IS_ENABLED(CONFIG_CAN_NETLINK) 115 114 struct can_priv *safe_candev_priv(struct net_device *dev); 116 - #else 117 - static inline struct can_priv *safe_candev_priv(struct net_device *dev) 118 - { 119 - return NULL; 120 - } 121 - #endif 122 115 123 116 int open_candev(struct net_device *dev); 124 117 void close_candev(struct net_device *dev); 125 118 void can_set_default_mtu(struct net_device *dev); 119 + void can_set_cap_info(struct net_device *dev); 126 120 int __must_check can_set_static_ctrlmode(struct net_device *dev, 127 121 u32 static_mode); 128 122 int can_hwtstamp_get(struct net_device *netdev,
+14 -11
include/linux/cgroup-defs.h
··· 626 626 #endif 627 627 628 628 /* All ancestors including self */ 629 - struct cgroup *ancestors[]; 629 + union { 630 + DECLARE_FLEX_ARRAY(struct cgroup *, ancestors); 631 + struct { 632 + struct cgroup *_root_ancestor; 633 + DECLARE_FLEX_ARRAY(struct cgroup *, _low_ancestors); 634 + }; 635 + }; 630 636 }; 631 637 632 638 /* ··· 653 647 struct list_head root_list; 654 648 struct rcu_head rcu; /* Must be near the top */ 655 649 656 - /* 657 - * The root cgroup. The containing cgroup_root will be destroyed on its 658 - * release. cgrp->ancestors[0] will be used overflowing into the 659 - * following field. cgrp_ancestor_storage must immediately follow. 660 - */ 661 - struct cgroup cgrp; 662 - 663 - /* must follow cgrp for cgrp->ancestors[0], see above */ 664 - struct cgroup *cgrp_ancestor_storage; 665 - 666 650 /* Number of cgroups in the hierarchy, used only for /proc/cgroups */ 667 651 atomic_t nr_cgrps; 668 652 ··· 664 668 665 669 /* The name for this hierarchy - may be empty */ 666 670 char name[MAX_CGROUP_ROOT_NAMELEN]; 671 + 672 + /* 673 + * The root cgroup. The containing cgroup_root will be destroyed on its 674 + * release. This must be embedded last due to flexible array at the end 675 + * of struct cgroup. 676 + */ 677 + struct cgroup cgrp; 667 678 }; 668 679 669 680 /*
+1
include/linux/filelock.h
··· 49 49 int (*lm_change)(struct file_lease *, int, struct list_head *); 50 50 void (*lm_setup)(struct file_lease *, void **); 51 51 bool (*lm_breaker_owns_lease)(struct file_lease *); 52 + int (*lm_open_conflict)(struct file *, int); 52 53 }; 53 54 54 55 struct lock_manager {
+1 -1
include/linux/ftrace.h
··· 1167 1167 */ 1168 1168 struct ftrace_graph_ent { 1169 1169 unsigned long func; /* Current function */ 1170 - unsigned long depth; 1170 + long depth; /* signed to check for less than zero */ 1171 1171 } __packed; 1172 1172 1173 1173 /*
+1 -1
include/linux/hrtimer.h
··· 2 2 /* 3 3 * hrtimers - High-resolution kernel timers 4 4 * 5 - * Copyright(C) 2005, Thomas Gleixner <tglx@linutronix.de> 5 + * Copyright(C) 2005, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 6 6 * Copyright(C) 2005, Red Hat, Inc., Ingo Molnar 7 7 * 8 8 * data type definitions, declarations, prototypes
+6 -2
include/linux/iomap.h
··· 88 88 /* 89 89 * Flags set by the core iomap code during operations: 90 90 * 91 + * IOMAP_F_FOLIO_BATCH indicates that the folio batch mechanism is active 92 + * for this operation, set by iomap_fill_dirty_folios(). 93 + * 91 94 * IOMAP_F_SIZE_CHANGED indicates to the iomap_end method that the file size 92 95 * has changed as the result of this write operation. 93 96 * ··· 98 95 * range it covers needs to be remapped by the high level before the operation 99 96 * can proceed. 100 97 */ 98 + #define IOMAP_F_FOLIO_BATCH (1U << 13) 101 99 #define IOMAP_F_SIZE_CHANGED (1U << 14) 102 100 #define IOMAP_F_STALE (1U << 15) 103 101 ··· 356 352 int iomap_file_unshare(struct inode *inode, loff_t pos, loff_t len, 357 353 const struct iomap_ops *ops, 358 354 const struct iomap_write_ops *write_ops); 359 - loff_t iomap_fill_dirty_folios(struct iomap_iter *iter, loff_t offset, 360 - loff_t length); 355 + unsigned int iomap_fill_dirty_folios(struct iomap_iter *iter, loff_t *start, 356 + loff_t end, unsigned int *iomap_flags); 361 357 int iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, 362 358 bool *did_zero, const struct iomap_ops *ops, 363 359 const struct iomap_write_ops *write_ops, void *private);
+1 -1
include/linux/ktime.h
··· 3 3 * 4 4 * ktime_t - nanosecond-resolution time format. 5 5 * 6 - * Copyright(C) 2005, Thomas Gleixner <tglx@linutronix.de> 6 + * Copyright(C) 2005, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 7 7 * Copyright(C) 2005, Red Hat, Inc., Ingo Molnar 8 8 * 9 9 * data type definitions, declarations, prototypes and macros.
+1 -1
include/linux/mtd/jedec.h
··· 2 2 /* 3 3 * Copyright © 2000-2010 David Woodhouse <dwmw2@infradead.org> 4 4 * Steven J. Hill <sjhill@realitydiluted.com> 5 - * Thomas Gleixner <tglx@linutronix.de> 5 + * Thomas Gleixner <tglx@kernel.org> 6 6 * 7 7 * Contains all JEDEC related definitions 8 8 */
+1 -1
include/linux/mtd/nand-ecc-sw-hamming.h
··· 2 2 /* 3 3 * Copyright (C) 2000-2010 Steven J. Hill <sjhill@realitydiluted.com> 4 4 * David Woodhouse <dwmw2@infradead.org> 5 - * Thomas Gleixner <tglx@linutronix.de> 5 + * Thomas Gleixner <tglx@kernel.org> 6 6 * 7 7 * This file is the header for the NAND Hamming ECC implementation. 8 8 */
+1 -1
include/linux/mtd/ndfc.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0-only */ 2 2 /* 3 - * Copyright (c) 2006 Thomas Gleixner <tglx@linutronix.de> 3 + * Copyright (c) 2006 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 4 4 * 5 5 * Info: 6 6 * Contains defines, datastructures for ndfc nand controller
+1 -1
include/linux/mtd/onfi.h
··· 2 2 /* 3 3 * Copyright © 2000-2010 David Woodhouse <dwmw2@infradead.org> 4 4 * Steven J. Hill <sjhill@realitydiluted.com> 5 - * Thomas Gleixner <tglx@linutronix.de> 5 + * Thomas Gleixner <tglx@kernel.org> 6 6 * 7 7 * Contains all ONFI related definitions 8 8 */
+1 -1
include/linux/mtd/platnand.h
··· 2 2 /* 3 3 * Copyright © 2000-2010 David Woodhouse <dwmw2@infradead.org> 4 4 * Steven J. Hill <sjhill@realitydiluted.com> 5 - * Thomas Gleixner <tglx@linutronix.de> 5 + * Thomas Gleixner <tglx@kernel.org> 6 6 * 7 7 * Contains all platform NAND related definitions. 8 8 */
+1 -1
include/linux/mtd/rawnand.h
··· 2 2 /* 3 3 * Copyright © 2000-2010 David Woodhouse <dwmw2@infradead.org> 4 4 * Steven J. Hill <sjhill@realitydiluted.com> 5 - * Thomas Gleixner <tglx@linutronix.de> 5 + * Thomas Gleixner <tglx@kernel.org> 6 6 * 7 7 * Info: 8 8 * Contains standard defines and IDs for NAND flash devices
+1 -1
include/linux/perf_event.h
··· 1 1 /* 2 2 * Performance events: 3 3 * 4 - * Copyright (C) 2008-2009, Thomas Gleixner <tglx@linutronix.de> 4 + * Copyright (C) 2008-2009, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 5 5 * Copyright (C) 2008-2011, Red Hat, Inc., Ingo Molnar 6 6 * Copyright (C) 2008-2011, Red Hat, Inc., Peter Zijlstra 7 7 *
+1 -1
include/linux/plist.h
··· 8 8 * 2001-2005 (c) MontaVista Software, Inc. 9 9 * Daniel Walker <dwalker@mvista.com> 10 10 * 11 - * (C) 2005 Thomas Gleixner <tglx@linutronix.de> 11 + * (C) 2005 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 12 12 * 13 13 * Simplifications of the original code by 14 14 * Oleg Nesterov <oleg@tv-sign.ru>
+1 -1
include/linux/rslib.h
··· 2 2 /* 3 3 * Generic Reed Solomon encoder / decoder library 4 4 * 5 - * Copyright (C) 2004 Thomas Gleixner (tglx@linutronix.de) 5 + * Copyright (C) 2004 Thomas Gleixner (tglx@kernel.org) 6 6 * 7 7 * RS code lifted from reed solomon library written by Phil Karn 8 8 * Copyright 2002 Phil Karn, KA9Q
+2 -2
include/linux/soc/airoha/airoha_offload.h
··· 52 52 { 53 53 } 54 54 55 - static inline int airoha_ppe_setup_tc_block_cb(struct airoha_ppe_dev *dev, 56 - void *type_data) 55 + static inline int airoha_ppe_dev_setup_tc_block_cb(struct airoha_ppe_dev *dev, 56 + void *type_data) 57 57 { 58 58 return -EOPNOTSUPP; 59 59 }
+9
include/linux/trace_recursion.h
··· 34 34 TRACE_INTERNAL_SIRQ_BIT, 35 35 TRACE_INTERNAL_TRANSITION_BIT, 36 36 37 + /* Internal event use recursion bits */ 38 + TRACE_INTERNAL_EVENT_BIT, 39 + TRACE_INTERNAL_EVENT_NMI_BIT, 40 + TRACE_INTERNAL_EVENT_IRQ_BIT, 41 + TRACE_INTERNAL_EVENT_SIRQ_BIT, 42 + TRACE_INTERNAL_EVENT_TRANSITION_BIT, 43 + 37 44 TRACE_BRANCH_BIT, 38 45 /* 39 46 * Abuse of the trace_recursion. ··· 64 57 #define TRACE_FTRACE_START TRACE_FTRACE_BIT 65 58 66 59 #define TRACE_LIST_START TRACE_INTERNAL_BIT 60 + 61 + #define TRACE_EVENT_START TRACE_INTERNAL_EVENT_BIT 67 62 68 63 #define TRACE_CONTEXT_MASK ((1 << (TRACE_LIST_START + TRACE_CONTEXT_BITS)) - 1) 69 64
+1 -1
include/linux/uio_driver.h
··· 3 3 * include/linux/uio_driver.h 4 4 * 5 5 * Copyright(C) 2005, Benedikt Spranger <b.spranger@linutronix.de> 6 - * Copyright(C) 2005, Thomas Gleixner <tglx@linutronix.de> 6 + * Copyright(C) 2005, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 7 7 * Copyright(C) 2006, Hans J. Koch <hjk@hansjkoch.de> 8 8 * Copyright(C) 2006, Greg Kroah-Hartman <greg@kroah.com> 9 9 *
+6
include/net/dropreason-core.h
··· 67 67 FN(TC_EGRESS) \ 68 68 FN(SECURITY_HOOK) \ 69 69 FN(QDISC_DROP) \ 70 + FN(QDISC_BURST_DROP) \ 70 71 FN(QDISC_OVERLIMIT) \ 71 72 FN(QDISC_CONGESTED) \ 72 73 FN(CAKE_FLOOD) \ ··· 375 374 * failed to enqueue to current qdisc) 376 375 */ 377 376 SKB_DROP_REASON_QDISC_DROP, 377 + /** 378 + * @SKB_DROP_REASON_QDISC_BURST_DROP: dropped when net.core.qdisc_max_burst 379 + * limit is hit. 380 + */ 381 + SKB_DROP_REASON_QDISC_BURST_DROP, 378 382 /** 379 383 * @SKB_DROP_REASON_QDISC_OVERLIMIT: dropped by qdisc when a qdisc 380 384 * instance exceeds its total buffer size limit.
+1
include/net/hotdata.h
··· 42 42 int netdev_budget_usecs; 43 43 int tstamp_prequeue; 44 44 int max_backlog; 45 + int qdisc_max_burst; 45 46 int dev_tx_weight; 46 47 int dev_rx_weight; 47 48 int sysctl_max_skb_frags;
+12 -1
include/net/ip_tunnels.h
··· 19 19 #include <net/rtnetlink.h> 20 20 #include <net/lwtunnel.h> 21 21 #include <net/dst_cache.h> 22 + #include <net/netdev_lock.h> 22 23 23 24 #if IS_ENABLED(CONFIG_IPV6) 24 25 #include <net/ipv6.h> ··· 373 372 fl4->flowi4_flags = flow_flags; 374 373 } 375 374 376 - int ip_tunnel_init(struct net_device *dev); 375 + int __ip_tunnel_init(struct net_device *dev); 376 + #define ip_tunnel_init(DEV) \ 377 + ({ \ 378 + struct net_device *__dev = (DEV); \ 379 + int __res = __ip_tunnel_init(__dev); \ 380 + \ 381 + if (!__res) \ 382 + netdev_lockdep_set_classes(__dev);\ 383 + __res; \ 384 + }) 385 + 377 386 void ip_tunnel_uninit(struct net_device *dev); 378 387 void ip_tunnel_dellink(struct net_device *dev, struct list_head *head); 379 388 struct net *ip_tunnel_get_link_net(const struct net_device *dev);
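The statement-expression wrapper appears to exist so that netdev_lockdep_set_classes(), which instantiates static lockdep key storage, expands at each caller rather than once inside net/ipv4/ip_tunnel.c, giving each tunnel type its own lock classes. One consequence worth noting: ip_tunnel_init is no longer addressable, so a hypothetical driver that previously assigned it to .ndo_init directly would now need a trivial wrapper:

/* Hypothetical wrapper; ip_tunnel_init() is a macro now and cannot be
 * taken as a function pointer.
 */
static int my_tunnel_init(struct net_device *dev)
{
	return ip_tunnel_init(dev);
}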
+6
include/scsi/scsi_eh.h
··· 41 41 unsigned char cmnd[32]; 42 42 struct scsi_data_buffer sdb; 43 43 struct scatterlist sense_sgl; 44 + 45 + /* struct request fields */ 46 + #ifdef CONFIG_BLK_INLINE_ENCRYPTION 47 + struct bio_crypt_ctx *rq_crypt_ctx; 48 + struct blk_crypto_keyslot *rq_crypt_keyslot; 49 + #endif 44 50 }; 45 51 46 52 extern void scsi_eh_prep_cmnd(struct scsi_cmnd *scmd,
-9
include/uapi/linux/media/arm/mali-c55-config.h
··· 195 195 } __attribute__((packed)); 196 196 197 197 /** 198 - * enum mali_c55_param_buffer_version - Mali-C55 parameters block versioning 199 - * 200 - * @MALI_C55_PARAM_BUFFER_V1: First version of Mali-C55 parameters block 201 - */ 202 - enum mali_c55_param_buffer_version { 203 - MALI_C55_PARAM_BUFFER_V1, 204 - }; 205 - 206 - /** 207 198 * enum mali_c55_param_block_type - Enumeration of Mali-C55 parameter blocks 208 199 * 209 200 * This enumeration defines the types of Mali-C55 parameters block. Each block
+1 -1
include/uapi/linux/perf_event.h
··· 2 2 /* 3 3 * Performance events: 4 4 * 5 - * Copyright (C) 2008-2009, Thomas Gleixner <tglx@linutronix.de> 5 + * Copyright (C) 2008-2009, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 6 6 * Copyright (C) 2008-2011, Red Hat, Inc., Ingo Molnar 7 7 * Copyright (C) 2008-2011, Red Hat, Inc., Peter Zijlstra 8 8 *
+1 -1
include/uapi/linux/xattr.h
··· 23 23 #define XATTR_REPLACE 0x2 /* set value, fail if attr does not exist */ 24 24 25 25 struct xattr_args { 26 - __aligned_u64 __user value; 26 + __aligned_u64 value; 27 27 __u32 size; 28 28 __u32 flags; 29 29 };
+4 -7
io_uring/io-wq.c
··· 947 947 return ret; 948 948 } 949 949 950 - static bool io_wq_for_each_worker(struct io_wq *wq, 950 + static void io_wq_for_each_worker(struct io_wq *wq, 951 951 bool (*func)(struct io_worker *, void *), 952 952 void *data) 953 953 { 954 - for (int i = 0; i < IO_WQ_ACCT_NR; i++) { 955 - if (!io_acct_for_each_worker(&wq->acct[i], func, data)) 956 - return false; 957 - } 958 - 959 - return true; 954 + for (int i = 0; i < IO_WQ_ACCT_NR; i++) 955 + if (io_acct_for_each_worker(&wq->acct[i], func, data)) 956 + break; 960 957 } 961 958 962 959 static bool io_wq_worker_wake(struct io_worker *worker, void *data)
+5
kernel/bpf/verifier.c
··· 9609 9609 if (reg->type != PTR_TO_MAP_VALUE) 9610 9610 return -EINVAL; 9611 9611 9612 + if (map->map_type == BPF_MAP_TYPE_INSN_ARRAY) { 9613 + verbose(env, "R%d points to insn_array map which cannot be used as const string\n", regno); 9614 + return -EACCES; 9615 + } 9616 + 9612 9617 if (!bpf_map_is_rdonly(map)) { 9613 9618 verbose(env, "R%d does not point to a readonly map'\n", regno); 9614 9619 return -EACCES;
+1 -1
kernel/cgroup/cgroup.c
··· 5847 5847 int ret; 5848 5848 5849 5849 /* allocate the cgroup and its ID, 0 is reserved for the root */ 5850 - cgrp = kzalloc(struct_size(cgrp, ancestors, (level + 1)), GFP_KERNEL); 5850 + cgrp = kzalloc(struct_size(cgrp, _low_ancestors, level), GFP_KERNEL); 5851 5851 if (!cgrp) 5852 5852 return ERR_PTR(-ENOMEM); 5853 5853
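The allocation still reserves the same space as before: struct_size(cgrp, _low_ancestors, level) is sizeof(struct cgroup) plus level pointer slots, and sizeof(struct cgroup) now includes the one-pointer _root_ancestor member of the union added in cgroup-defs.h above, so a cgroup at level N keeps N + 1 ancestor entries, matching the old struct_size(cgrp, ancestors, level + 1). That named slot is also what lets cgroup_root drop cgrp_ancestor_storage and simply embed cgrp as its last member.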
+1 -1
kernel/events/callchain.c
··· 2 2 /* 3 3 * Performance events callchain code, extracted from core.c: 4 4 * 5 - * Copyright (C) 2008 Thomas Gleixner <tglx@linutronix.de> 5 + * Copyright (C) 2008 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 6 6 * Copyright (C) 2008-2011 Red Hat, Inc., Ingo Molnar 7 7 * Copyright (C) 2008-2011 Red Hat, Inc., Peter Zijlstra 8 8 * Copyright © 2009 Paul Mackerras, IBM Corp. <paulus@au1.ibm.com>
+7 -1
kernel/events/core.c
··· 2 2 /* 3 3 * Performance events core code: 4 4 * 5 - * Copyright (C) 2008 Thomas Gleixner <tglx@linutronix.de> 5 + * Copyright (C) 2008 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 6 6 * Copyright (C) 2008-2011 Red Hat, Inc., Ingo Molnar 7 7 * Copyright (C) 2008-2011 Red Hat, Inc., Peter Zijlstra 8 8 * Copyright © 2009 Paul Mackerras, IBM Corp. <paulus@au1.ibm.com> ··· 11906 11906 } 11907 11907 } 11908 11908 11909 + static void perf_swevent_destroy_hrtimer(struct perf_event *event) 11910 + { 11911 + hrtimer_cancel(&event->hw.hrtimer); 11912 + } 11913 + 11909 11914 static void perf_swevent_init_hrtimer(struct perf_event *event) 11910 11915 { 11911 11916 struct hw_perf_event *hwc = &event->hw; ··· 11919 11914 return; 11920 11915 11921 11916 hrtimer_setup(&hwc->hrtimer, perf_swevent_hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD); 11917 + event->destroy = perf_swevent_destroy_hrtimer; 11922 11918 11923 11919 /* 11924 11920 * Since hrtimers have a fixed rate, we can do a static freq->period
+1 -1
kernel/events/ring_buffer.c
··· 2 2 /* 3 3 * Performance events ring-buffer code: 4 4 * 5 - * Copyright (C) 2008 Thomas Gleixner <tglx@linutronix.de> 5 + * Copyright (C) 2008 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 6 6 * Copyright (C) 2008-2011 Red Hat, Inc., Ingo Molnar 7 7 * Copyright (C) 2008-2011 Red Hat, Inc., Peter Zijlstra 8 8 * Copyright © 2009 Paul Mackerras, IBM Corp. <paulus@au1.ibm.com>
+1 -1
kernel/irq/debugfs.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 - // Copyright 2017 Thomas Gleixner <tglx@linutronix.de> 2 + // Copyright 2017 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 3 3 4 4 #include <linux/irqdomain.h> 5 5 #include <linux/irq.h>
+1 -1
kernel/irq/matrix.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 - // Copyright (C) 2017 Thomas Gleixner <tglx@linutronix.de> 2 + // Copyright (C) 2017 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 3 3 4 4 #include <linux/spinlock.h> 5 5 #include <linux/seq_file.h>
+10 -4
kernel/power/swap.c
··· 902 902 for (thr = 0; thr < nr_threads; thr++) { 903 903 if (data[thr].thr) 904 904 kthread_stop(data[thr].thr); 905 - acomp_request_free(data[thr].cr); 906 - crypto_free_acomp(data[thr].cc); 905 + if (data[thr].cr) 906 + acomp_request_free(data[thr].cr); 907 + 908 + if (!IS_ERR_OR_NULL(data[thr].cc)) 909 + crypto_free_acomp(data[thr].cc); 907 910 } 908 911 vfree(data); 909 912 } ··· 1502 1499 for (thr = 0; thr < nr_threads; thr++) { 1503 1500 if (data[thr].thr) 1504 1501 kthread_stop(data[thr].thr); 1505 - acomp_request_free(data[thr].cr); 1506 - crypto_free_acomp(data[thr].cc); 1502 + if (data[thr].cr) 1503 + acomp_request_free(data[thr].cr); 1504 + 1505 + if (!IS_ERR_OR_NULL(data[thr].cc)) 1506 + crypto_free_acomp(data[thr].cc); 1507 1507 } 1508 1508 vfree(data); 1509 1509 }
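The two added guards cover distinct failure states left behind by the allocation side: the crypto handle comes from an ERR_PTR()-returning allocator, while the request may never have been allocated at all. A sketch of the pairing, assuming the crypto_alloc_acomp() path this file uses (flags elided, error label hypothetical):

data[thr].cc = crypto_alloc_acomp(hib_comp_algo, 0, 0);
if (IS_ERR(data[thr].cc)) {
	/* .cc holds an ERR_PTR and .cr is still NULL here, so the
	 * teardown loop must use IS_ERR_OR_NULL() for .cc and a
	 * plain NULL check before acomp_request_free(.cr).
	 */
	goto out_clean;
}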
+3 -2
kernel/sched/core.c
··· 10694 10694 sched_mm_cid_exit(t); 10695 10695 } 10696 10696 10697 - /* Reactivate MM CID after successful execve() */ 10697 + /* Reactivate MM CID after execve() */ 10698 10698 void sched_mm_cid_after_execve(struct task_struct *t) 10699 10699 { 10700 - sched_mm_cid_fork(t); 10700 + if (t->mm) 10701 + sched_mm_cid_fork(t); 10701 10702 } 10702 10703 10703 10704 static void mm_cid_work_fn(struct work_struct *work)
+1 -1
kernel/sched/fair.c
··· 15 15 * Author: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com> 16 16 * 17 17 * Scaled math optimizations by Thomas Gleixner 18 - * Copyright (C) 2007, Thomas Gleixner <tglx@linutronix.de> 18 + * Copyright (C) 2007, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 19 19 * 20 20 * Adaptive scheduling granularity, math enhancements by Peter Zijlstra 21 21 * Copyright (C) 2007 Red Hat, Inc., Peter Zijlstra
+1 -1
kernel/sched/pelt.c
··· 15 15 * Author: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com> 16 16 * 17 17 * Scaled math optimizations by Thomas Gleixner 18 - * Copyright (C) 2007, Thomas Gleixner <tglx@linutronix.de> 18 + * Copyright (C) 2007, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 19 19 * 20 20 * Adaptive scheduling granularity, math enhancements by Peter Zijlstra 21 21 * Copyright (C) 2007 Red Hat, Inc., Peter Zijlstra
+1 -1
kernel/time/clockevents.c
··· 2 2 /* 3 3 * This file contains functions which manage clock event devices. 4 4 * 5 - * Copyright(C) 2005-2006, Thomas Gleixner <tglx@linutronix.de> 5 + * Copyright(C) 2005-2006, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 6 6 * Copyright(C) 2005-2007, Red Hat, Inc., Ingo Molnar 7 7 * Copyright(C) 2006-2007, Timesys Corp., Thomas Gleixner 8 8 */
+1 -1
kernel/time/hrtimer.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* 3 - * Copyright(C) 2005-2006, Thomas Gleixner <tglx@linutronix.de> 3 + * Copyright(C) 2005-2006, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 4 4 * Copyright(C) 2005-2007, Red Hat, Inc., Ingo Molnar 5 5 * Copyright(C) 2006-2007 Timesys Corp., Thomas Gleixner 6 6 *
+1 -1
kernel/time/tick-broadcast.c
··· 3 3 * This file contains functions which emulate a local clock-event 4 4 * device via a broadcast event source. 5 5 * 6 - * Copyright(C) 2005-2006, Thomas Gleixner <tglx@linutronix.de> 6 + * Copyright(C) 2005-2006, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 7 7 * Copyright(C) 2005-2007, Red Hat, Inc., Ingo Molnar 8 8 * Copyright(C) 2006-2007, Timesys Corp., Thomas Gleixner 9 9 */
+1 -1
kernel/time/tick-common.c
··· 3 3 * This file contains the base functions to manage periodic tick 4 4 * related events. 5 5 * 6 - * Copyright(C) 2005-2006, Thomas Gleixner <tglx@linutronix.de> 6 + * Copyright(C) 2005-2006, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 7 7 * Copyright(C) 2005-2007, Red Hat, Inc., Ingo Molnar 8 8 * Copyright(C) 2006-2007, Timesys Corp., Thomas Gleixner 9 9 */
+1 -1
kernel/time/tick-oneshot.c
··· 3 3 * This file contains functions which manage high resolution tick 4 4 * related events. 5 5 * 6 - * Copyright(C) 2005-2006, Thomas Gleixner <tglx@linutronix.de> 6 + * Copyright(C) 2005-2006, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 7 7 * Copyright(C) 2005-2007, Red Hat, Inc., Ingo Molnar 8 8 * Copyright(C) 2006-2007, Timesys Corp., Thomas Gleixner 9 9 */
+1 -1
kernel/time/tick-sched.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* 3 - * Copyright(C) 2005-2006, Thomas Gleixner <tglx@linutronix.de> 3 + * Copyright(C) 2005-2006, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 4 4 * Copyright(C) 2005-2007, Red Hat, Inc., Ingo Molnar 5 5 * Copyright(C) 2006-2007 Timesys Corp., Thomas Gleixner 6 6 *
+2
kernel/trace/ring_buffer.c
··· 3137 3137 list) { 3138 3138 list_del_init(&bpage->list); 3139 3139 free_buffer_page(bpage); 3140 + 3141 + cond_resched(); 3140 3142 } 3141 3143 } 3142 3144 out_err_unlock:
+7 -1
kernel/trace/trace.c
··· 138 138 * by commas. 139 139 */ 140 140 /* Set to string format zero to disable by default */ 141 - char ftrace_dump_on_oops[MAX_TRACER_SIZE] = "0"; 141 + static char ftrace_dump_on_oops[MAX_TRACER_SIZE] = "0"; 142 142 143 143 /* When set, tracing will stop when a WARN*() is hit */ 144 144 static int __disable_trace_on_warning; ··· 3012 3012 struct ftrace_stack *fstack; 3013 3013 struct stack_entry *entry; 3014 3014 int stackidx; 3015 + int bit; 3016 + 3017 + bit = trace_test_and_set_recursion(_THIS_IP_, _RET_IP_, TRACE_EVENT_START); 3018 + if (bit < 0) 3019 + return; 3015 3020 3016 3021 /* 3017 3022 * Add one, for this function and the call to save_stack_trace() ··· 3085 3080 /* Again, don't let gcc optimize things here */ 3086 3081 barrier(); 3087 3082 __this_cpu_dec(ftrace_stack_reserve); 3083 + trace_clear_recursion(bit); 3088 3084 } 3089 3085 3090 3086 static inline void ftrace_trace_stack(struct trace_array *tr,
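The guard pairs with the new TRACE_INTERNAL_EVENT_* bits added in trace_recursion.h above: __ftrace_trace_stack() gets its own per-context recursion set (normal, softirq, irq, NMI, transition), separate from the function-tracer bits, so re-entry into stack recording is rejected per context without colliding with the TRACE_LIST_START users.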
+3 -4
kernel/trace/trace_events.c
··· 826 826 * When soft_disable is set and enable is set, we want to 827 827 * register the tracepoint for the event, but leave the event 828 828 * as is. That means, if the event was already enabled, we do 829 - * nothing (but set soft_mode). If the event is disabled, we 830 - * set SOFT_DISABLED before enabling the event tracepoint, so 831 - * it still seems to be disabled. 829 + * nothing. If the event is disabled, we set SOFT_DISABLED 830 + * before enabling the event tracepoint, so it still seems 831 + * to be disabled. 832 832 */ 833 833 if (!soft_disable) 834 834 clear_bit(EVENT_FILE_FL_SOFT_DISABLED_BIT, &file->flags); 835 835 else { 836 836 if (atomic_inc_return(&file->sm_ref) > 1) 837 837 break; 838 - soft_mode = true; 839 838 /* Enable use of trace_buffered_event */ 840 839 trace_buffered_event_enable(); 841 840 }
+2 -2
lib/crypto/aes.c
··· 13 13 * Emit the sbox as volatile const to prevent the compiler from doing 14 14 * constant folding on sbox references involving fixed indexes. 15 15 */ 16 - static volatile const u8 __cacheline_aligned aes_sbox[] = { 16 + static volatile const u8 ____cacheline_aligned aes_sbox[] = { 17 17 0x63, 0x7c, 0x77, 0x7b, 0xf2, 0x6b, 0x6f, 0xc5, 18 18 0x30, 0x01, 0x67, 0x2b, 0xfe, 0xd7, 0xab, 0x76, 19 19 0xca, 0x82, 0xc9, 0x7d, 0xfa, 0x59, 0x47, 0xf0, ··· 48 48 0x41, 0x99, 0x2d, 0x0f, 0xb0, 0x54, 0xbb, 0x16, 49 49 }; 50 50 51 - static volatile const u8 __cacheline_aligned aes_inv_sbox[] = { 51 + static volatile const u8 ____cacheline_aligned aes_inv_sbox[] = { 52 52 0x52, 0x09, 0x6a, 0xd5, 0x30, 0x36, 0xa5, 0x38, 53 53 0xbf, 0x40, 0xa3, 0x9e, 0x81, 0xf3, 0xd7, 0xfb, 54 54 0x7c, 0xe3, 0x39, 0x82, 0x9b, 0x2f, 0xff, 0x87,
+1 -1
lib/crypto/tests/polyval_kunit.c
··· 183 183 184 184 rand_bytes(state.raw_key, sizeof(state.raw_key)); 185 185 polyval_preparekey(&state.expected_key, state.raw_key); 186 - kunit_run_irq_test(test, polyval_irq_test_func, 20000, &state); 186 + kunit_run_irq_test(test, polyval_irq_test_func, 200000, &state); 187 187 } 188 188 189 189 static int polyval_suite_init(struct kunit_suite *suite)
+1 -1
lib/debugobjects.c
··· 2 2 /* 3 3 * Generic infrastructure for lifetime debugging of objects. 4 4 * 5 - * Copyright (C) 2008, Thomas Gleixner <tglx@linutronix.de> 5 + * Copyright (C) 2008, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 6 6 */ 7 7 8 8 #define pr_fmt(fmt) "ODEBUG: " fmt
+1 -1
lib/plist.c
··· 10 10 * 2001-2005 (c) MontaVista Software, Inc. 11 11 * Daniel Walker <dwalker@mvista.com> 12 12 * 13 - * (C) 2005 Thomas Gleixner <tglx@linutronix.de> 13 + * (C) 2005 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 14 14 * 15 15 * Simplifications of the original code by 16 16 * Oleg Nesterov <oleg@tv-sign.ru>
+1 -1
lib/reed_solomon/decode_rs.c
··· 5 5 * Copyright 2002, Phil Karn, KA9Q 6 6 * May be used under the terms of the GNU General Public License (GPL) 7 7 * 8 - * Adaption to the kernel by Thomas Gleixner (tglx@linutronix.de) 8 + * Adaption to the kernel by Thomas Gleixner (tglx@kernel.org) 9 9 * 10 10 * Generic data width independent code which is included by the wrappers. 11 11 */
+1 -1
lib/reed_solomon/encode_rs.c
··· 5 5 * Copyright 2002, Phil Karn, KA9Q 6 6 * May be used under the terms of the GNU General Public License (GPL) 7 7 * 8 - * Adaption to the kernel by Thomas Gleixner (tglx@linutronix.de) 8 + * Adaption to the kernel by Thomas Gleixner (tglx@kernel.org) 9 9 * 10 10 * Generic data width independent code which is included by the wrappers. 11 11 */
+1 -1
lib/reed_solomon/reed_solomon.c
··· 2 2 /* 3 3 * Generic Reed Solomon encoder / decoder library 4 4 * 5 - * Copyright (C) 2004 Thomas Gleixner (tglx@linutronix.de) 5 + * Copyright (C) 2004 Thomas Gleixner (tglx@kernel.org) 6 6 * 7 7 * Reed Solomon code lifted from reed solomon library written by Phil Karn 8 8 * Copyright 2002 Phil Karn, KA9Q
+1
net/bluetooth/hci_sync.c
··· 4420 4420 if (bis_capable(hdev)) { 4421 4421 events[1] |= 0x20; /* LE PA Report */ 4422 4422 events[1] |= 0x40; /* LE PA Sync Established */ 4423 + events[1] |= 0x80; /* LE PA Sync Lost */ 4423 4424 events[3] |= 0x04; /* LE Create BIG Complete */ 4424 4425 events[3] |= 0x08; /* LE Terminate BIG Complete */ 4425 4426 events[3] |= 0x10; /* LE BIG Sync Established */
+17 -8
net/bpf/test_run.c
··· 1294 1294 batch_size = NAPI_POLL_WEIGHT; 1295 1295 else if (batch_size > TEST_XDP_MAX_BATCH) 1296 1296 return -E2BIG; 1297 - 1298 - headroom += sizeof(struct xdp_page_head); 1299 1297 } else if (batch_size) { 1300 1298 return -EINVAL; 1301 1299 } ··· 1306 1308 /* There can't be user provided data before the meta data */ 1307 1309 if (ctx->data_meta || ctx->data_end > kattr->test.data_size_in || 1308 1310 ctx->data > ctx->data_end || 1309 - unlikely(xdp_metalen_invalid(ctx->data)) || 1310 1311 (do_live && (kattr->test.data_out || kattr->test.ctx_out))) 1311 1312 goto free_ctx; 1312 - /* Meta data is allocated from the headroom */ 1313 - headroom -= ctx->data; 1314 1313 1315 1314 meta_sz = ctx->data; 1315 + if (xdp_metalen_invalid(meta_sz) || meta_sz > headroom - sizeof(struct xdp_frame)) 1316 + goto free_ctx; 1317 + 1318 + /* Meta data is allocated from the headroom */ 1319 + headroom -= meta_sz; 1316 1320 linear_sz = ctx->data_end; 1317 1321 } 1322 + 1323 + /* The xdp_page_head structure takes up space in each page, limiting the 1324 + * size of the packet data; add the extra size to headroom here to make 1325 + * sure it's accounted in the length checks below, but not in the 1326 + * metadata size check above. 1327 + */ 1328 + if (do_live) 1329 + headroom += sizeof(struct xdp_page_head); 1318 1330 1319 1331 max_linear_sz = PAGE_SIZE - headroom - tailroom; 1320 1332 linear_sz = min_t(u32, linear_sz, max_linear_sz); ··· 1363 1355 1364 1356 if (sinfo->nr_frags == MAX_SKB_FRAGS) { 1365 1357 ret = -ENOMEM; 1366 - goto out; 1358 + goto out_put_dev; 1367 1359 } 1368 1360 1369 1361 page = alloc_page(GFP_KERNEL); 1370 1362 if (!page) { 1371 1363 ret = -ENOMEM; 1372 - goto out; 1364 + goto out_put_dev; 1373 1365 } 1374 1366 1375 1367 frag = &sinfo->frags[sinfo->nr_frags++]; ··· 1381 1373 if (copy_from_user(page_address(page), data_in + size, 1382 1374 data_len)) { 1383 1375 ret = -EFAULT; 1384 - goto out; 1376 + goto out_put_dev; 1385 1377 } 1386 1378 sinfo->xdp_frags_size += data_len; 1387 1379 size += data_len; ··· 1396 1388 ret = bpf_test_run_xdp_live(prog, &xdp, repeat, batch_size, &duration); 1397 1389 else 1398 1390 ret = bpf_test_run(prog, &xdp, repeat, &retval, &duration, true); 1391 + out_put_dev: 1399 1392 /* We convert the xdp_buff back to an xdp_md before checking the return 1400 1393 * code so the reference count of any held netdevice will be decremented 1401 1394 * even if the test run failed.
+16 -12
net/bridge/br_fdb.c
··· 70 70 { 71 71 return !test_bit(BR_FDB_STATIC, &fdb->flags) && 72 72 !test_bit(BR_FDB_ADDED_BY_EXT_LEARN, &fdb->flags) && 73 - time_before_eq(fdb->updated + hold_time(br), jiffies); 73 + time_before_eq(READ_ONCE(fdb->updated) + hold_time(br), jiffies); 74 74 } 75 75 76 76 static int fdb_to_nud(const struct net_bridge *br, ··· 126 126 if (nla_put_u32(skb, NDA_FLAGS_EXT, ext_flags)) 127 127 goto nla_put_failure; 128 128 129 - ci.ndm_used = jiffies_to_clock_t(now - fdb->used); 129 + ci.ndm_used = jiffies_to_clock_t(now - READ_ONCE(fdb->used)); 130 130 ci.ndm_confirmed = 0; 131 - ci.ndm_updated = jiffies_to_clock_t(now - fdb->updated); 131 + ci.ndm_updated = jiffies_to_clock_t(now - READ_ONCE(fdb->updated)); 132 132 ci.ndm_refcnt = 0; 133 133 if (nla_put(skb, NDA_CACHEINFO, sizeof(ci), &ci)) 134 134 goto nla_put_failure; ··· 551 551 */ 552 552 rcu_read_lock(); 553 553 hlist_for_each_entry_rcu(f, &br->fdb_list, fdb_node) { 554 - unsigned long this_timer = f->updated + delay; 554 + unsigned long this_timer = READ_ONCE(f->updated) + delay; 555 555 556 556 if (test_bit(BR_FDB_STATIC, &f->flags) || 557 557 test_bit(BR_FDB_ADDED_BY_EXT_LEARN, &f->flags)) { ··· 924 924 { 925 925 struct net_bridge_fdb_entry *f; 926 926 struct __fdb_entry *fe = buf; 927 + unsigned long delta; 927 928 int num = 0; 928 929 929 930 memset(buf, 0, maxnum*sizeof(struct __fdb_entry)); ··· 954 953 fe->port_hi = f->dst->port_no >> 8; 955 954 956 955 fe->is_local = test_bit(BR_FDB_LOCAL, &f->flags); 957 - if (!test_bit(BR_FDB_STATIC, &f->flags)) 958 - fe->ageing_timer_value = jiffies_delta_to_clock_t(jiffies - f->updated); 956 + if (!test_bit(BR_FDB_STATIC, &f->flags)) { 957 + delta = jiffies - READ_ONCE(f->updated); 958 + fe->ageing_timer_value = 959 + jiffies_delta_to_clock_t(delta); 960 + } 959 961 ++fe; 960 962 ++num; 961 963 } ··· 1006 1002 unsigned long now = jiffies; 1007 1003 bool fdb_modified = false; 1008 1004 1009 - if (now != fdb->updated) { 1010 - fdb->updated = now; 1005 + if (now != READ_ONCE(fdb->updated)) { 1006 + WRITE_ONCE(fdb->updated, now); 1011 1007 fdb_modified = __fdb_mark_active(fdb); 1012 1008 } 1013 1009 ··· 1246 1242 if (fdb_handle_notify(fdb, notify)) 1247 1243 modified = true; 1248 1244 1249 - fdb->used = jiffies; 1245 + WRITE_ONCE(fdb->used, jiffies); 1250 1246 if (modified) { 1251 1247 if (refresh) 1252 - fdb->updated = jiffies; 1248 + WRITE_ONCE(fdb->updated, jiffies); 1253 1249 fdb_notify(br, fdb, RTM_NEWNEIGH, true); 1254 1250 } 1255 1251 ··· 1560 1556 goto err_unlock; 1561 1557 } 1562 1558 1563 - fdb->updated = jiffies; 1559 + WRITE_ONCE(fdb->updated, jiffies); 1564 1560 1565 1561 if (READ_ONCE(fdb->dst) != p) { 1566 1562 WRITE_ONCE(fdb->dst, p); ··· 1569 1565 1570 1566 if (test_and_set_bit(BR_FDB_ADDED_BY_EXT_LEARN, &fdb->flags)) { 1571 1567 /* Refresh entry */ 1572 - fdb->used = jiffies; 1568 + WRITE_ONCE(fdb->used, jiffies); 1573 1569 } else { 1574 1570 modified = true; 1575 1571 }
+2 -2
net/bridge/br_input.c
··· 221 221 if (test_bit(BR_FDB_LOCAL, &dst->flags)) 222 222 return br_pass_frame_up(skb, false); 223 223 224 - if (now != dst->used) 225 - dst->used = now; 224 + if (now != READ_ONCE(dst->used)) 225 + WRITE_ONCE(dst->used, now); 226 226 br_forward(dst->dst, skb, local_rcv, false); 227 227 } else { 228 228 if (!mcast_hit)
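The bridge hunks change no logic; they annotate every lockless access to fdb->updated and fdb->used. Reduced to its shape (illustrative fragment):

/* Writer, fast path, no lock held: */
WRITE_ONCE(fdb->updated, jiffies);

/* Concurrent reader: READ_ONCE() prevents load tearing and compiler
 * refetching, and marks the data race as intentional for KCSAN.
 */
bool expired = time_before_eq(READ_ONCE(fdb->updated) + hold_time(br),
			      jiffies);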
+9 -1
net/can/j1939/transport.c
··· 1695 1695 1696 1696 j1939_session_timers_cancel(session); 1697 1697 j1939_session_cancel(session, J1939_XTP_ABORT_BUSY); 1698 - if (session->transmission) 1698 + if (session->transmission) { 1699 1699 j1939_session_deactivate_activate_next(session); 1700 + } else if (session->state == J1939_SESSION_WAITING_ABORT) { 1701 + /* Force deactivation for the receiver. 1702 + * If we rely on the timer starting in j1939_session_cancel, 1703 + * a second RTS call here will cancel that timer and fail 1704 + * to restart it because the state is already WAITING_ABORT. 1705 + */ 1706 + j1939_session_deactivate_activate_next(session); 1707 + } 1700 1708 1701 1709 return -EBUSY; 1702 1710 }
+10 -41
net/can/raw.c
··· 49 49 #include <linux/if_arp.h> 50 50 #include <linux/skbuff.h> 51 51 #include <linux/can.h> 52 + #include <linux/can/can-ml.h> 52 53 #include <linux/can/core.h> 53 - #include <linux/can/dev.h> /* for can_is_canxl_dev_mtu() */ 54 54 #include <linux/can/skb.h> 55 55 #include <linux/can/raw.h> 56 56 #include <net/sock.h> ··· 892 892 } 893 893 } 894 894 895 - static inline bool raw_dev_cc_enabled(struct net_device *dev, 896 - struct can_priv *priv) 897 - { 898 - /* The CANXL-only mode disables error-signalling on the CAN bus 899 - * which is needed to send CAN CC/FD frames 900 - */ 901 - if (priv) 902 - return !can_dev_in_xl_only_mode(priv); 903 - 904 - /* virtual CAN interfaces always support CAN CC */ 905 - return true; 906 - } 907 - 908 - static inline bool raw_dev_fd_enabled(struct net_device *dev, 909 - struct can_priv *priv) 910 - { 911 - /* check FD ctrlmode on real CAN interfaces */ 912 - if (priv) 913 - return (priv->ctrlmode & CAN_CTRLMODE_FD); 914 - 915 - /* check MTU for virtual CAN FD interfaces */ 916 - return (READ_ONCE(dev->mtu) >= CANFD_MTU); 917 - } 918 - 919 - static inline bool raw_dev_xl_enabled(struct net_device *dev, 920 - struct can_priv *priv) 921 - { 922 - /* check XL ctrlmode on real CAN interfaces */ 923 - if (priv) 924 - return (priv->ctrlmode & CAN_CTRLMODE_XL); 925 - 926 - /* check MTU for virtual CAN XL interfaces */ 927 - return can_is_canxl_dev_mtu(READ_ONCE(dev->mtu)); 928 - } 929 - 930 895 static unsigned int raw_check_txframe(struct raw_sock *ro, struct sk_buff *skb, 931 896 struct net_device *dev) 932 897 { 933 - struct can_priv *priv = safe_candev_priv(dev); 934 - 935 898 /* Classical CAN */ 936 - if (can_is_can_skb(skb) && raw_dev_cc_enabled(dev, priv)) 899 + if (can_is_can_skb(skb) && can_cap_enabled(dev, CAN_CAP_CC)) 937 900 return CAN_MTU; 938 901 939 902 /* CAN FD */ 940 903 if (ro->fd_frames && can_is_canfd_skb(skb) && 941 - raw_dev_fd_enabled(dev, priv)) 904 + can_cap_enabled(dev, CAN_CAP_FD)) 942 905 return CANFD_MTU; 943 906 944 907 /* CAN XL */ 945 908 if (ro->xl_frames && can_is_canxl_skb(skb) && 946 - raw_dev_xl_enabled(dev, priv)) 909 + can_cap_enabled(dev, CAN_CAP_XL)) 947 910 return CANXL_MTU; 948 911 949 912 return 0; ··· 944 981 dev = dev_get_by_index(sock_net(sk), ifindex); 945 982 if (!dev) 946 983 return -ENXIO; 984 + 985 + /* no sending on a CAN device in read-only mode */ 986 + if (can_cap_enabled(dev, CAN_CAP_RO)) { 987 + err = -EACCES; 988 + goto put_dev; 989 + } 947 990 948 991 skb = sock_alloc_send_skb(sk, size + sizeof(struct can_skb_priv), 949 992 msg->msg_flags & MSG_DONTWAIT, &err);
+2
net/ceph/messenger_v2.c
··· 2376 2376 2377 2377 ceph_decode_64_safe(&p, end, global_id, bad); 2378 2378 ceph_decode_32_safe(&p, end, con->v2.con_mode, bad); 2379 + 2379 2380 ceph_decode_32_safe(&p, end, payload_len, bad); 2381 + ceph_decode_need(&p, end, payload_len, bad); 2380 2382 2381 2383 dout("%s con %p global_id %llu con_mode %d payload_len %d\n", 2382 2384 __func__, con, global_id, con->v2.con_mode, payload_len);
+1 -1
net/ceph/mon_client.c
··· 1417 1417 if (!ret) 1418 1418 finish_hunting(monc); 1419 1419 mutex_unlock(&monc->mutex); 1420 - return 0; 1420 + return ret; 1421 1421 } 1422 1422 1423 1423 static int mon_handle_auth_bad_method(struct ceph_connection *con,
+12 -2
net/ceph/osd_client.c
··· 1586 1586 struct ceph_pg_pool_info *pi; 1587 1587 struct ceph_pg pgid, last_pgid; 1588 1588 struct ceph_osds up, acting; 1589 + bool should_be_paused; 1589 1590 bool is_read = t->flags & CEPH_OSD_FLAG_READ; 1590 1591 bool is_write = t->flags & CEPH_OSD_FLAG_WRITE; 1591 1592 bool force_resend = false; ··· 1655 1654 &last_pgid)) 1656 1655 force_resend = true; 1657 1656 1658 - if (t->paused && !target_should_be_paused(osdc, t, pi)) { 1659 - t->paused = false; 1657 + should_be_paused = target_should_be_paused(osdc, t, pi); 1658 + if (t->paused && !should_be_paused) { 1660 1659 unpaused = true; 1661 1660 } 1661 + if (t->paused != should_be_paused) { 1662 + dout("%s t %p paused %d -> %d\n", __func__, t, t->paused, 1663 + should_be_paused); 1664 + t->paused = should_be_paused; 1665 + } 1666 + 1662 1667 legacy_change = ceph_pg_compare(&t->pgid, &pgid) || 1663 1668 ceph_osds_changed(&t->acting, &acting, 1664 1669 t->used_replica || any_change); ··· 4287 4280 dout("%s osd%d unknown\n", __func__, osd->o_osd); 4288 4281 goto out_unlock; 4289 4282 } 4283 + 4284 + osd->o_sparse_op_idx = -1; 4285 + ceph_init_sparse_read(&osd->o_sparse_read); 4290 4286 4291 4287 if (!reopen_osd(osd)) 4292 4288 kick_osd_requests(osd);
+15 -9
net/ceph/osdmap.c
··· 241 241 242 242 static void free_choose_arg_map(struct crush_choose_arg_map *arg_map) 243 243 { 244 - if (arg_map) { 245 - int i, j; 244 + int i, j; 246 245 247 - WARN_ON(!RB_EMPTY_NODE(&arg_map->node)); 246 + if (!arg_map) 247 + return; 248 248 249 + WARN_ON(!RB_EMPTY_NODE(&arg_map->node)); 250 + 251 + if (arg_map->args) { 249 252 for (i = 0; i < arg_map->size; i++) { 250 253 struct crush_choose_arg *arg = &arg_map->args[i]; 251 - 252 - for (j = 0; j < arg->weight_set_size; j++) 253 - kfree(arg->weight_set[j].weights); 254 - kfree(arg->weight_set); 254 + if (arg->weight_set) { 255 + for (j = 0; j < arg->weight_set_size; j++) 256 + kfree(arg->weight_set[j].weights); 257 + kfree(arg->weight_set); 258 + } 255 259 kfree(arg->ids); 256 260 } 257 261 kfree(arg_map->args); 258 - kfree(arg_map); 259 262 } 263 + kfree(arg_map); 260 264 } 261 265 262 266 DEFINE_RB_FUNCS(choose_arg_map, struct crush_choose_arg_map, choose_args_index, ··· 1983 1979 sizeof(u64) + sizeof(u32), e_inval); 1984 1980 ceph_decode_copy(p, &fsid, sizeof(fsid)); 1985 1981 epoch = ceph_decode_32(p); 1986 - BUG_ON(epoch != map->epoch+1); 1987 1982 ceph_decode_copy(p, &modified, sizeof(modified)); 1988 1983 new_pool_max = ceph_decode_64(p); 1989 1984 new_flags = ceph_decode_32(p); 1985 + 1986 + if (epoch != map->epoch + 1) 1987 + goto e_inval; 1990 1988 1991 1989 /* full map? */ 1992 1990 ceph_decode_32_safe(p, end, len, e_inval);
+22 -9
net/core/dev.c
··· 477 477 ARPHRD_IEEE1394, ARPHRD_EUI64, ARPHRD_INFINIBAND, ARPHRD_SLIP, 478 478 ARPHRD_CSLIP, ARPHRD_SLIP6, ARPHRD_CSLIP6, ARPHRD_RSRVD, 479 479 ARPHRD_ADAPT, ARPHRD_ROSE, ARPHRD_X25, ARPHRD_HWX25, 480 + ARPHRD_CAN, ARPHRD_MCTP, 480 481 ARPHRD_PPP, ARPHRD_CISCO, ARPHRD_LAPB, ARPHRD_DDCMP, 481 - ARPHRD_RAWHDLC, ARPHRD_TUNNEL, ARPHRD_TUNNEL6, ARPHRD_FRAD, 482 + ARPHRD_RAWHDLC, ARPHRD_RAWIP, 483 + ARPHRD_TUNNEL, ARPHRD_TUNNEL6, ARPHRD_FRAD, 482 484 ARPHRD_SKIP, ARPHRD_LOOPBACK, ARPHRD_LOCALTLK, ARPHRD_FDDI, 483 485 ARPHRD_BIF, ARPHRD_SIT, ARPHRD_IPDDP, ARPHRD_IPGRE, 484 486 ARPHRD_PIMREG, ARPHRD_HIPPI, ARPHRD_ASH, ARPHRD_ECONET, 485 487 ARPHRD_IRDA, ARPHRD_FCPP, ARPHRD_FCAL, ARPHRD_FCPL, 486 488 ARPHRD_FCFABRIC, ARPHRD_IEEE80211, ARPHRD_IEEE80211_PRISM, 487 - ARPHRD_IEEE80211_RADIOTAP, ARPHRD_PHONET, ARPHRD_PHONET_PIPE, 488 - ARPHRD_IEEE802154, ARPHRD_VOID, ARPHRD_NONE}; 489 + ARPHRD_IEEE80211_RADIOTAP, 490 + ARPHRD_IEEE802154, ARPHRD_IEEE802154_MONITOR, 491 + ARPHRD_PHONET, ARPHRD_PHONET_PIPE, 492 + ARPHRD_CAIF, ARPHRD_IP6GRE, ARPHRD_NETLINK, ARPHRD_6LOWPAN, 493 + ARPHRD_VSOCKMON, 494 + ARPHRD_VOID, ARPHRD_NONE}; 489 495 490 496 static const char *const netdev_lock_name[] = { 491 497 "_xmit_NETROM", "_xmit_ETHER", "_xmit_EETHER", "_xmit_AX25", ··· 500 494 "_xmit_IEEE1394", "_xmit_EUI64", "_xmit_INFINIBAND", "_xmit_SLIP", 501 495 "_xmit_CSLIP", "_xmit_SLIP6", "_xmit_CSLIP6", "_xmit_RSRVD", 502 496 "_xmit_ADAPT", "_xmit_ROSE", "_xmit_X25", "_xmit_HWX25", 497 + "_xmit_CAN", "_xmit_MCTP", 503 498 "_xmit_PPP", "_xmit_CISCO", "_xmit_LAPB", "_xmit_DDCMP", 504 - "_xmit_RAWHDLC", "_xmit_TUNNEL", "_xmit_TUNNEL6", "_xmit_FRAD", 499 + "_xmit_RAWHDLC", "_xmit_RAWIP", 500 + "_xmit_TUNNEL", "_xmit_TUNNEL6", "_xmit_FRAD", 505 501 "_xmit_SKIP", "_xmit_LOOPBACK", "_xmit_LOCALTLK", "_xmit_FDDI", 506 502 "_xmit_BIF", "_xmit_SIT", "_xmit_IPDDP", "_xmit_IPGRE", 507 503 "_xmit_PIMREG", "_xmit_HIPPI", "_xmit_ASH", "_xmit_ECONET", 508 504 "_xmit_IRDA", "_xmit_FCPP", "_xmit_FCAL", "_xmit_FCPL", 509 505 "_xmit_FCFABRIC", "_xmit_IEEE80211", "_xmit_IEEE80211_PRISM", 510 - "_xmit_IEEE80211_RADIOTAP", "_xmit_PHONET", "_xmit_PHONET_PIPE", 511 - "_xmit_IEEE802154", "_xmit_VOID", "_xmit_NONE"}; 506 + "_xmit_IEEE80211_RADIOTAP", 507 + "_xmit_IEEE802154", "_xmit_IEEE802154_MONITOR", 508 + "_xmit_PHONET", "_xmit_PHONET_PIPE", 509 + "_xmit_CAIF", "_xmit_IP6GRE", "_xmit_NETLINK", "_xmit_6LOWPAN", 510 + "_xmit_VSOCKMON", 511 + "_xmit_VOID", "_xmit_NONE"}; 512 512 513 513 static struct lock_class_key netdev_xmit_lock_key[ARRAY_SIZE(netdev_lock_type)]; 514 514 static struct lock_class_key netdev_addr_lock_key[ARRAY_SIZE(netdev_lock_type)]; ··· 527 515 if (netdev_lock_type[i] == dev_type) 528 516 return i; 529 517 /* the last key is used by default */ 518 + WARN_ONCE(1, "netdev_lock_pos() could not find dev_type=%u\n", dev_type); 530 519 return ARRAY_SIZE(netdev_lock_type) - 1; 531 520 } 532 521 ··· 4202 4189 do { 4203 4190 if (first_n && !defer_count) { 4204 4191 defer_count = atomic_long_inc_return(&q->defer_count); 4205 - if (unlikely(defer_count > READ_ONCE(q->limit))) { 4206 - kfree_skb_reason(skb, SKB_DROP_REASON_QDISC_DROP); 4192 + if (unlikely(defer_count > READ_ONCE(net_hotdata.qdisc_max_burst))) { 4193 + kfree_skb_reason(skb, SKB_DROP_REASON_QDISC_BURST_DROP); 4207 4194 return NET_XMIT_DROP; 4208 4195 } 4209 4196 } ··· 4221 4208 ll_list = llist_del_all(&q->defer_list); 4222 4209 /* There is a small race because we clear defer_count not atomically 4223 4210 * with the prior llist_del_all(). 
This means defer_list could grow 4224 - * over q->limit. 4211 + * over qdisc_max_burst. 4225 4212 */ 4226 4213 atomic_long_set(&q->defer_count, 0); 4227 4214
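The ARPHRD additions keep the netdev_lock_type[] and netdev_lock_name[] tables in sync with device types that previously fell through to the shared last key, and the new WARN_ONCE() turns that silent fallback into a diagnosable event. The defer_count change pairs with the sysctl below: drops past the limit are now charged to the dedicated QDISC_BURST_DROP reason rather than the generic QDISC_DROP.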
+1
net/core/dst.c
··· 68 68 dst->lwtstate = NULL; 69 69 rcuref_init(&dst->__rcuref, 1); 70 70 INIT_LIST_HEAD(&dst->rt_uncached); 71 + dst->rt_uncached_list = NULL; 71 72 dst->__use = 0; 72 73 dst->lastuse = jiffies; 73 74 dst->flags = flags;
+1
net/core/hotdata.c
··· 17 17 18 18 .tstamp_prequeue = 1, 19 19 .max_backlog = 1000, 20 + .qdisc_max_burst = 1000, 20 21 .dev_tx_weight = 64, 21 22 .dev_rx_weight = 64, 22 23 .sysctl_max_skb_frags = MAX_SKB_FRAGS,
+7
net/core/sysctl_net_core.c
··· 430 430 .proc_handler = proc_dointvec 431 431 }, 432 432 { 433 + .procname = "qdisc_max_burst", 434 + .data = &net_hotdata.qdisc_max_burst, 435 + .maxlen = sizeof(int), 436 + .mode = 0644, 437 + .proc_handler = proc_dointvec 438 + }, 439 + { 433 440 .procname = "netdev_rss_key", 434 441 .data = &netdev_rss_key, 435 442 .maxlen = sizeof(int),
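With this entry the limit consumed in net/core/dev.c above becomes runtime-tunable as net.core.qdisc_max_burst (default 1000, per the hotdata initializer), e.g. "sysctl -w net.core.qdisc_max_burst=2000".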
+2 -2
net/ipv4/esp4_offload.c
··· 122 122 struct sk_buff *skb, 123 123 netdev_features_t features) 124 124 { 125 - const struct xfrm_mode *inner_mode = xfrm_ip2inner_mode(x, 126 - XFRM_MODE_SKB_CB(skb)->protocol); 125 + struct xfrm_offload *xo = xfrm_offload(skb); 126 + const struct xfrm_mode *inner_mode = xfrm_ip2inner_mode(x, xo->proto); 127 127 __be16 type = inner_mode->family == AF_INET6 ? htons(ETH_P_IPV6) 128 128 : htons(ETH_P_IP); 129 129
+9 -2
net/ipv4/ip_gre.c
··· 891 891 const void *daddr, const void *saddr, unsigned int len) 892 892 { 893 893 struct ip_tunnel *t = netdev_priv(dev); 894 - struct iphdr *iph; 895 894 struct gre_base_hdr *greh; 895 + struct iphdr *iph; 896 + int needed; 896 897 897 - iph = skb_push(skb, t->hlen + sizeof(*iph)); 898 + needed = t->hlen + sizeof(*iph); 899 + if (skb_headroom(skb) < needed && 900 + pskb_expand_head(skb, HH_DATA_ALIGN(needed - skb_headroom(skb)), 901 + 0, GFP_ATOMIC)) 902 + return -needed; 903 + 904 + iph = skb_push(skb, needed); 898 905 greh = (struct gre_base_hdr *)(iph+1); 899 906 greh->flags = gre_tnl_flags_to_gre_flags(t->parms.o_flags); 900 907 greh->protocol = htons(type);
+2 -3
net/ipv4/ip_tunnel.c
··· 1281 1281 } 1282 1282 EXPORT_SYMBOL_GPL(ip_tunnel_changelink); 1283 1283 1284 - int ip_tunnel_init(struct net_device *dev) 1284 + int __ip_tunnel_init(struct net_device *dev) 1285 1285 { 1286 1286 struct ip_tunnel *tunnel = netdev_priv(dev); 1287 1287 struct iphdr *iph = &tunnel->parms.iph; ··· 1308 1308 1309 1309 if (tunnel->collect_md) 1310 1310 netif_keep_dst(dev); 1311 - netdev_lockdep_set_classes(dev); 1312 1311 return 0; 1313 1312 } 1314 - EXPORT_SYMBOL_GPL(ip_tunnel_init); 1313 + EXPORT_SYMBOL_GPL(__ip_tunnel_init); 1315 1314 1316 1315 void ip_tunnel_uninit(struct net_device *dev) 1317 1316 {
+2 -2
net/ipv4/route.c
··· 1537 1537 1538 1538 void rt_del_uncached_list(struct rtable *rt) 1539 1539 { 1540 - if (!list_empty(&rt->dst.rt_uncached)) { 1541 - struct uncached_list *ul = rt->dst.rt_uncached_list; 1540 + struct uncached_list *ul = rt->dst.rt_uncached_list; 1542 1541 1542 + if (ul) { 1543 1543 spin_lock_bh(&ul->lock); 1544 1544 list_del_init(&rt->dst.rt_uncached); 1545 1545 spin_unlock_bh(&ul->lock);
+2 -2
net/ipv6/addrconf.c
··· 3112 3112 in6_ifa_hold(ifp); 3113 3113 read_unlock_bh(&idev->lock); 3114 3114 3115 - ipv6_del_addr(ifp); 3116 - 3117 3115 if (!(ifp->flags & IFA_F_TEMPORARY) && 3118 3116 (ifp->flags & IFA_F_MANAGETEMPADDR)) 3119 3117 delete_tempaddrs(idev, ifp); 3118 + 3119 + ipv6_del_addr(ifp); 3120 3120 3121 3121 addrconf_verify_rtnl(net); 3122 3122 if (ipv6_addr_is_multicast(pfx)) {
+2 -2
net/ipv6/esp6_offload.c
··· 158 158 struct sk_buff *skb, 159 159 netdev_features_t features) 160 160 { 161 - const struct xfrm_mode *inner_mode = xfrm_ip2inner_mode(x, 162 - XFRM_MODE_SKB_CB(skb)->protocol); 161 + struct xfrm_offload *xo = xfrm_offload(skb); 162 + const struct xfrm_mode *inner_mode = xfrm_ip2inner_mode(x, xo->proto); 163 163 __be16 type = inner_mode->family == AF_INET ? htons(ETH_P_IP) 164 164 : htons(ETH_P_IPV6); 165 165
+1 -1
net/ipv6/ip6_tunnel.c
··· 844 844 845 845 skb_reset_network_header(skb); 846 846 847 - if (!pskb_inet_may_pull(skb)) { 847 + if (skb_vlan_inet_prepare(skb, true)) { 848 848 DEV_STATS_INC(tunnel->dev, rx_length_errors); 849 849 DEV_STATS_INC(tunnel->dev, rx_errors); 850 850 goto drop;
+2 -2
net/ipv6/route.c
··· 148 148 149 149 void rt6_uncached_list_del(struct rt6_info *rt) 150 150 { 151 - if (!list_empty(&rt->dst.rt_uncached)) { 152 - struct uncached_list *ul = rt->dst.rt_uncached_list; 151 + struct uncached_list *ul = rt->dst.rt_uncached_list; 153 152 153 + if (ul) { 154 154 spin_lock_bh(&ul->lock); 155 155 list_del_init(&rt->dst.rt_uncached); 156 156 spin_unlock_bh(&ul->lock);
+4 -2
net/sched/sch_qfq.c
··· 529 529 return 0; 530 530 531 531 destroy_class: 532 - qdisc_put(cl->qdisc); 533 - kfree(cl); 532 + if (!existing) { 533 + qdisc_put(cl->qdisc); 534 + kfree(cl); 535 + } 534 536 return err; 535 537 } 536 538
+1
net/xfrm/xfrm_state.c
··· 3151 3151 int err; 3152 3152 3153 3153 if (family == AF_INET && 3154 + (!x->dir || x->dir == XFRM_SA_DIR_OUT) && 3154 3155 READ_ONCE(xs_net(x)->ipv4.sysctl_ip_no_pmtu_disc)) 3155 3156 x->props.flags |= XFRM_STATE_NOPMTUDISC; 3156 3157
+42
rust/helpers/bitops.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 3 3 #include <linux/bitops.h> 4 + #include <linux/find.h> 4 5 5 6 void rust_helper___set_bit(unsigned long nr, unsigned long *addr) 6 7 { ··· 22 21 { 23 22 clear_bit(nr, addr); 24 23 } 24 + 25 + /* 26 + * The rust_helper_ prefix is intentionally omitted below so that the 27 + * declarations in include/linux/find.h are compatible with these helpers. 28 + * 29 + * Note that the below #ifdefs mean that the helper is only created if C does 30 + * not provide a definition. 31 + */ 32 + #ifdef find_first_zero_bit 33 + __rust_helper 34 + unsigned long _find_first_zero_bit(const unsigned long *p, unsigned long size) 35 + { 36 + return find_first_zero_bit(p, size); 37 + } 38 + #endif /* find_first_zero_bit */ 39 + 40 + #ifdef find_next_zero_bit 41 + __rust_helper 42 + unsigned long _find_next_zero_bit(const unsigned long *addr, 43 + unsigned long size, unsigned long offset) 44 + { 45 + return find_next_zero_bit(addr, size, offset); 46 + } 47 + #endif /* find_next_zero_bit */ 48 + 49 + #ifdef find_first_bit 50 + __rust_helper 51 + unsigned long _find_first_bit(const unsigned long *addr, unsigned long size) 52 + { 53 + return find_first_bit(addr, size); 54 + } 55 + #endif /* find_first_bit */ 56 + 57 + #ifdef find_next_bit 58 + __rust_helper 59 + unsigned long _find_next_bit(const unsigned long *addr, unsigned long size, 60 + unsigned long offset) 61 + { 62 + return find_next_bit(addr, size, offset); 63 + } 64 + #endif /* find_next_bit */
+3 -4
rust/kernel/device.rs
··· 14 14 15 15 #[cfg(CONFIG_PRINTK)] 16 16 use crate::c_str; 17 - use crate::str::CStrExt as _; 18 17 19 18 pub mod property; 20 19 ··· 66 67 /// 67 68 /// # Implementing Bus Devices 68 69 /// 69 - /// This section provides a guideline to implement bus specific devices, such as [`pci::Device`] or 70 - /// [`platform::Device`]. 70 + /// This section provides a guideline to implement bus specific devices, such as: 71 + #[cfg_attr(CONFIG_PCI, doc = "* [`pci::Device`](kernel::pci::Device)")] 72 + /// * [`platform::Device`] 71 73 /// 72 74 /// A bus specific device should be defined as follows. 73 75 /// ··· 160 160 /// 161 161 /// [`AlwaysRefCounted`]: kernel::types::AlwaysRefCounted 162 162 /// [`impl_device_context_deref`]: kernel::impl_device_context_deref 163 - /// [`pci::Device`]: kernel::pci::Device 164 163 /// [`platform::Device`]: kernel::platform::Device 165 164 #[repr(transparent)] 166 165 pub struct Device<Ctx: DeviceContext = Normal>(Opaque<bindings::device>, PhantomData<Ctx>);
+1 -1
rust/kernel/device_id.rs
··· 15 15 /// # Safety 16 16 /// 17 17 /// Implementers must ensure that `Self` is layout-compatible with [`RawDeviceId::RawType`]; 18 - /// i.e. it's safe to transmute to `RawDeviceId`. 18 + /// i.e. it's safe to transmute to `RawType`. 19 19 /// 20 20 /// This requirement is needed so `IdArray::new` can convert `Self` to `RawType` when building 21 21 /// the ID table.
+3 -4
rust/kernel/dma.rs
··· 27 27 /// Trait to be implemented by DMA capable bus devices. 28 28 /// 29 29 /// The [`dma::Device`](Device) trait should be implemented by bus specific device representations, 30 - /// where the underlying bus is DMA capable, such as [`pci::Device`](::kernel::pci::Device) or 31 - /// [`platform::Device`](::kernel::platform::Device). 30 + /// where the underlying bus is DMA capable, such as: 31 + #[cfg_attr(CONFIG_PCI, doc = "* [`pci::Device`](kernel::pci::Device)")] 32 + /// * [`platform::Device`](::kernel::platform::Device) 32 33 pub trait Device: AsRef<device::Device<Core>> { 33 34 /// Set up the device's DMA streaming addressing capabilities. 34 35 /// ··· 533 532 /// 534 533 /// # Safety 535 534 /// 536 - /// * Callers must ensure that the device does not read/write to/from memory while the returned 537 - /// slice is live. 538 535 /// * Callers must ensure that this call does not race with a read or write to the same region 539 536 /// that overlaps with this write. 540 537 ///
+8 -4
rust/kernel/driver.rs
··· 33 33 //! } 34 34 //! ``` 35 35 //! 36 - //! For specific examples see [`auxiliary::Driver`], [`pci::Driver`] and [`platform::Driver`]. 36 + //! For specific examples see: 37 + //! 38 + //! * [`platform::Driver`](kernel::platform::Driver) 39 + #![cfg_attr( 40 + CONFIG_AUXILIARY_BUS, 41 + doc = "* [`auxiliary::Driver`](kernel::auxiliary::Driver)" 42 + )] 43 + #![cfg_attr(CONFIG_PCI, doc = "* [`pci::Driver`](kernel::pci::Driver)")] 37 44 //! 38 45 //! The `probe()` callback should return a `impl PinInit<Self, Error>`, i.e. the driver's private 39 46 //! data. The bus abstraction should store the pointer in the corresponding bus device. The generic ··· 86 79 //! 87 80 //! For this purpose the generic infrastructure in [`device_id`] should be used. 88 81 //! 89 - //! [`auxiliary::Driver`]: kernel::auxiliary::Driver 90 82 //! [`Core`]: device::Core 91 83 //! [`Device`]: device::Device 92 84 //! [`Device<Core>`]: device::Device<device::Core> ··· 93 87 //! [`DeviceContext`]: device::DeviceContext 94 88 //! [`device_id`]: kernel::device_id 95 89 //! [`module_driver`]: kernel::module_driver 96 - //! [`pci::Driver`]: kernel::pci::Driver 97 - //! [`platform::Driver`]: kernel::platform::Driver 98 90 99 91 use crate::error::{Error, Result}; 100 92 use crate::{acpi, device, of, str::CStr, try_pin_init, types::Opaque, ThisModule};
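The cfg_attr-gated doc bullets keep these intra-doc links valid in every configuration: an unconditional link to kernel::pci::Driver would fail rustdoc when CONFIG_PCI is disabled, because the target item does not exist in that build. The same pattern is applied to the pci::Device links in device.rs and dma.rs above.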
+2 -2
rust/kernel/pci/io.rs
··· 20 20 /// 21 21 /// # Invariants 22 22 /// 23 - /// `Bar` always holds an `IoRaw` inststance that holds a valid pointer to the start of the I/O 23 + /// `Bar` always holds an `IoRaw` instance that holds a valid pointer to the start of the I/O 24 24 /// memory mapped PCI BAR and its size. 25 25 pub struct Bar<const SIZE: usize = 0> { 26 26 pdev: ARef<Device>, ··· 54 54 let ioptr: usize = unsafe { bindings::pci_iomap(pdev.as_raw(), num, 0) } as usize; 55 55 if ioptr == 0 { 56 56 // SAFETY: 57 - // `pdev` valid by the invariants of `Device`. 57 + // `pdev` is valid by the invariants of `Device`. 58 58 // `num` is checked for validity by a previous call to `Device::resource_len`. 59 59 unsafe { bindings::pci_release_region(pdev.as_raw(), num) }; 60 60 return Err(ENOMEM);
+1 -1
scripts/crypto/gen-hash-testvecs.py
··· 118 118 def alg_digest_size_const(alg): 119 119 if alg.startswith('blake2'): 120 120 return f'{alg.upper()}_HASH_SIZE' 121 - return f'{alg.upper().replace('-', '_')}_DIGEST_SIZE' 121 + return f"{alg.upper().replace('-', '_')}_DIGEST_SIZE" 122 122 123 123 def gen_unkeyed_testvecs(alg): 124 124 print('')
+1 -1
scripts/spdxcheck.py
··· 1 1 #!/usr/bin/env python3 2 2 # SPDX-License-Identifier: GPL-2.0 3 - # Copyright Thomas Gleixner <tglx@linutronix.de> 3 + # Copyright Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 4 4 5 5 from argparse import ArgumentParser 6 6 from ply import lex, yacc
+1 -1
tools/include/uapi/linux/perf_event.h
··· 2 2 /* 3 3 * Performance events: 4 4 * 5 - * Copyright (C) 2008-2009, Thomas Gleixner <tglx@linutronix.de> 5 + * Copyright (C) 2008-2009, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 6 6 * Copyright (C) 2008-2011, Red Hat, Inc., Ingo Molnar 7 7 * Copyright (C) 2008-2011, Red Hat, Inc., Peter Zijlstra 8 8 *
+6 -3
tools/net/ynl/pyynl/lib/doc_generator.py
··· 165 165 continue 166 166 lines.append(self.fmt.rst_paragraph(self.fmt.bold(key), level + 1)) 167 167 if key in ['request', 'reply']: 168 - lines.append(self.parse_do_attributes(do_dict[key], level + 1) + "\n") 168 + lines.append(self.parse_op_attributes(do_dict[key], level + 1) + "\n") 169 169 else: 170 170 lines.append(self.fmt.headroom(level + 2) + do_dict[key] + "\n") 171 171 172 172 return "\n".join(lines) 173 173 174 - def parse_do_attributes(self, attrs: Dict[str, Any], level: int = 0) -> str: 174 + def parse_op_attributes(self, attrs: Dict[str, Any], level: int = 0) -> str: 175 175 """Parse 'attributes' section""" 176 176 if "attributes" not in attrs: 177 177 return "" ··· 183 183 184 184 def parse_operations(self, operations: List[Dict[str, Any]], namespace: str) -> str: 185 185 """Parse operations block""" 186 - preprocessed = ["name", "doc", "title", "do", "dump", "flags"] 186 + preprocessed = ["name", "doc", "title", "do", "dump", "flags", "event"] 187 187 linkable = ["fixed-header", "attribute-set"] 188 188 lines = [] 189 189 ··· 216 216 if "dump" in operation: 217 217 lines.append(self.fmt.rst_paragraph(":dump:", 0)) 218 218 lines.append(self.parse_do(operation["dump"], 0)) 219 + if "event" in operation: 220 + lines.append(self.fmt.rst_paragraph(":event:", 0)) 221 + lines.append(self.parse_op_attributes(operation["event"], 0)) 219 222 220 223 # New line after fields 221 224 lines.append("\n")
+1 -1
tools/perf/builtin-list.c
··· 4 4 * 5 5 * Builtin list command: list all event types 6 6 * 7 - * Copyright (C) 2009, Thomas Gleixner <tglx@linutronix.de> 7 + * Copyright (C) 2009, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 8 8 * Copyright (C) 2008-2009, Red Hat Inc, Ingo Molnar <mingo@redhat.com> 9 9 * Copyright (C) 2011, Red Hat Inc, Arnaldo Carvalho de Melo <acme@redhat.com> 10 10 */
+11 -3
tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c
··· 47 47 struct test_xdp_context_test_run *skel = NULL;
48 48 char data[sizeof(pkt_v4) + sizeof(__u32)];
49 49 char bad_ctx[sizeof(struct xdp_md) + 1];
50 + char large_data[256];
50 51 struct xdp_md ctx_in, ctx_out;
51 52 DECLARE_LIBBPF_OPTS(bpf_test_run_opts, opts,
52 53 .data_in = &data,
··· 95 94 test_xdp_context_error(prog_fd, opts, 4, sizeof(__u32), sizeof(data),
96 95 0, 0, 0);
97 96 
98 - /* Meta data must be 255 bytes or smaller */
99 - test_xdp_context_error(prog_fd, opts, 0, 256, sizeof(data), 0, 0, 0);
100 -
101 97 /* Total size of data must be data_end - data_meta or larger */
102 98 test_xdp_context_error(prog_fd, opts, 0, sizeof(__u32),
103 99 sizeof(data) + 1, 0, 0, 0);
··· 113 115 /* The egress cannot be specified */
114 116 test_xdp_context_error(prog_fd, opts, 0, sizeof(__u32), sizeof(data),
115 117 0, 0, 1);
118 +
119 + /* Meta data must be 216 bytes or smaller (256 - sizeof(struct
120 + * xdp_frame)). Test both the nearest invalid size and the nearest
121 + * invalid 4-byte-aligned size, and make sure data_in is large enough
122 + * that we actually hit the check on metadata length.
123 + */
124 + opts.data_in = large_data;
125 + opts.data_size_in = sizeof(large_data);
126 + test_xdp_context_error(prog_fd, opts, 0, 217, sizeof(large_data), 0, 0, 0);
127 + test_xdp_context_error(prog_fd, opts, 0, 220, sizeof(large_data), 0, 0, 0);
116 128 
117 129 test_xdp_context_test_run__destroy(skel);
118 130 }
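The new bound follows from how BPF_PROG_TEST_RUN lays out the XDP buffer: a struct xdp_frame lives inside the packet headroom, so of the 256 headroom bytes only 256 - sizeof(struct xdp_frame) = 216 remain for metadata, and XDP metadata must additionally stay 4-byte aligned, which is why both 217 and 220 are probed. A minimal sketch of the combined constraint; the stub struct, macro, and helper names are illustrative stand-ins, not the kernel's actual identifiers:

#include <stddef.h>
#include <stdio.h>

#define TEST_RUN_HEADROOM 256

/* Stand-in for struct xdp_frame; 40 matches the 256 - 216 arithmetic above. */
struct xdp_frame_stub { unsigned char bookkeeping[40]; };

static int meta_len_ok(size_t meta_len)
{
	/* The frame bookkeeping eats into the headroom: 256 - 40 = 216. */
	if (meta_len > TEST_RUN_HEADROOM - sizeof(struct xdp_frame_stub))
		return 0;		/* 217 and 220 both trip this */
	return meta_len % 4 == 0;	/* 217 would also fail alignment */
}

int main(void)
{
	printf("216: %d, 217: %d, 220: %d\n",
	       meta_len_ok(216), meta_len_ok(217), meta_len_ok(220));
	return 0;
}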
+2 -2
tools/testing/selftests/drivers/net/hw/toeplitz.c
··· 485 485 486 486 bitmap = strtoul(arg, NULL, 0); 487 487 488 - if (bitmap & ~(RPS_MAX_CPUS - 1)) 489 - error(1, 0, "rps bitmap 0x%lx out of bounds 0..%lu", 488 + if (bitmap & ~((1UL << RPS_MAX_CPUS) - 1)) 489 + error(1, 0, "rps bitmap 0x%lx out of bounds, max cpu %lu", 490 490 bitmap, RPS_MAX_CPUS - 1); 491 491 492 492 for (i = 0; i < RPS_MAX_CPUS; i++)
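The previous expression used the CPU count where a bit mask was needed: with a power-of-two RPS_MAX_CPUS, ~(RPS_MAX_CPUS - 1) clears only the low log2(RPS_MAX_CPUS) bits, so any CPU at or above that index was wrongly reported as out of bounds. A self-contained illustration, using 16 as a stand-in value for RPS_MAX_CPUS:

#include <stdio.h>

int main(void)
{
	unsigned long max_cpus = 16;		/* stand-in for RPS_MAX_CPUS */
	unsigned long bitmap = 1UL << 10;	/* CPU 10: clearly in range */

	/* Old check: treats the CPU count as if it were the mask itself,
	 * so CPU 10 (> log2(16)) is rejected even though it is valid.
	 */
	printf("old rejects cpu10: %d\n", !!(bitmap & ~(max_cpus - 1)));

	/* New check: build the mask of all valid CPU bits first. */
	printf("new rejects cpu10: %d\n",
	       !!(bitmap & ~((1UL << max_cpus) - 1)));
	return 0;
}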
+4 -2
tools/testing/selftests/drivers/net/hw/toeplitz.py
··· 94 94 mask = 0 95 95 for cpu in rps_cpus: 96 96 mask |= (1 << cpu) 97 - mask = hex(mask)[2:] 97 + 98 + mask = hex(mask) 98 99 99 100 # Set RPS bitmap for all rx queues 100 101 for rps_file in glob.glob(f"/sys/class/net/{cfg.ifname}/queues/rx-*/rps_cpus"): 101 102 with open(rps_file, "w", encoding="utf-8") as fp: 102 - fp.write(mask) 103 + # sysfs expects hex without '0x' prefix, toeplitz.c needs the prefix 104 + fp.write(mask[2:]) 103 105 104 106 return mask 105 107
+17 -1
tools/testing/selftests/ftrace/test.d/00basic/trace_marker_raw.tc
··· 89 89 # The id must be four bytes, test that 3 bytes fails a write 90 90 if echo -n abc > ./trace_marker_raw ; then 91 91 echo "Too small of write expected to fail but did not" 92 + echo ${ORIG} > buffer_size_kb 92 93 exit_fail 93 94 fi 94 95 ··· 100 99 101 100 if write_buffer 0xdeadbeef $size ; then 102 101 echo "Too big of write expected to fail but did not" 102 + echo ${ORIG} > buffer_size_kb 103 103 exit_fail 104 104 fi 105 105 } 106 106 107 + ORIG=`cat buffer_size_kb` 108 + 109 + # test_multiple_writes test needs at least 12KB buffer 110 + NEW_SIZE=12 111 + 112 + if [ ${ORIG} -lt ${NEW_SIZE} ]; then 113 + echo ${NEW_SIZE} > buffer_size_kb 114 + fi 115 + 107 116 test_buffer 108 - test_multiple_writes 117 + if ! test_multiple_writes; then 118 + echo ${ORIG} > buffer_size_kb 119 + exit_fail 120 + fi 121 + 122 + echo ${ORIG} > buffer_size_kb
+85 -59
tools/testing/selftests/kvm/x86/amx_test.c
··· 69 69 : : "a"(tile), "d"(0)); 70 70 } 71 71 72 + static inline int tileloadd_safe(void *tile) 73 + { 74 + return kvm_asm_safe(".byte 0xc4,0xe2,0x7b,0x4b,0x04,0x10", 75 + "a"(tile), "d"(0)); 76 + } 77 + 72 78 static inline void __tilerelease(void) 73 79 { 74 80 asm volatile(".byte 0xc4, 0xe2, 0x78, 0x49, 0xc0" ::); ··· 130 124 } 131 125 } 132 126 127 + enum { 128 + /* Retrieve TMM0 from guest, stash it for TEST_RESTORE_TILEDATA */ 129 + TEST_SAVE_TILEDATA = 1, 130 + 131 + /* Check TMM0 against tiledata */ 132 + TEST_COMPARE_TILEDATA = 2, 133 + 134 + /* Restore TMM0 from earlier save */ 135 + TEST_RESTORE_TILEDATA = 4, 136 + 137 + /* Full VM save/restore */ 138 + TEST_SAVE_RESTORE = 8, 139 + }; 140 + 133 141 static void __attribute__((__flatten__)) guest_code(struct tile_config *amx_cfg, 134 142 struct tile_data *tiledata, 135 143 struct xstate *xstate) 136 144 { 145 + int vector; 146 + 137 147 GUEST_ASSERT(this_cpu_has(X86_FEATURE_XSAVE) && 138 148 this_cpu_has(X86_FEATURE_OSXSAVE)); 139 149 check_xtile_info(); 140 - GUEST_SYNC(1); 150 + GUEST_SYNC(TEST_SAVE_RESTORE); 141 151 142 152 /* xfd=0, enable amx */ 143 153 wrmsr(MSR_IA32_XFD, 0); 144 - GUEST_SYNC(2); 154 + GUEST_SYNC(TEST_SAVE_RESTORE); 145 155 GUEST_ASSERT(rdmsr(MSR_IA32_XFD) == 0); 146 156 set_tilecfg(amx_cfg); 147 157 __ldtilecfg(amx_cfg); 148 - GUEST_SYNC(3); 158 + GUEST_SYNC(TEST_SAVE_RESTORE); 149 159 /* Check save/restore when trap to userspace */ 150 160 __tileloadd(tiledata); 151 - GUEST_SYNC(4); 161 + GUEST_SYNC(TEST_SAVE_TILEDATA | TEST_COMPARE_TILEDATA | TEST_SAVE_RESTORE); 162 + 163 + /* xfd=0x40000, disable amx tiledata */ 164 + wrmsr(MSR_IA32_XFD, XFEATURE_MASK_XTILE_DATA); 165 + 166 + /* host tries setting tiledata while guest XFD is set */ 167 + GUEST_SYNC(TEST_RESTORE_TILEDATA); 168 + GUEST_SYNC(TEST_SAVE_RESTORE); 169 + 170 + wrmsr(MSR_IA32_XFD, 0); 152 171 __tilerelease(); 153 - GUEST_SYNC(5); 172 + GUEST_SYNC(TEST_SAVE_RESTORE); 154 173 /* 155 174 * After XSAVEC, XTILEDATA is cleared in the xstate_bv but is set in 156 175 * the xcomp_bv. 
··· 184 153 __xsavec(xstate, XFEATURE_MASK_XTILE_DATA); 185 154 GUEST_ASSERT(!(xstate->header.xstate_bv & XFEATURE_MASK_XTILE_DATA)); 186 155 GUEST_ASSERT(xstate->header.xcomp_bv & XFEATURE_MASK_XTILE_DATA); 156 + 157 + /* #NM test */ 187 158 188 159 /* xfd=0x40000, disable amx tiledata */ 189 160 wrmsr(MSR_IA32_XFD, XFEATURE_MASK_XTILE_DATA); ··· 199 166 GUEST_ASSERT(!(xstate->header.xstate_bv & XFEATURE_MASK_XTILE_DATA)); 200 167 GUEST_ASSERT((xstate->header.xcomp_bv & XFEATURE_MASK_XTILE_DATA)); 201 168 202 - GUEST_SYNC(6); 169 + GUEST_SYNC(TEST_SAVE_RESTORE); 203 170 GUEST_ASSERT(rdmsr(MSR_IA32_XFD) == XFEATURE_MASK_XTILE_DATA); 204 171 set_tilecfg(amx_cfg); 205 172 __ldtilecfg(amx_cfg); 173 + 206 174 /* Trigger #NM exception */ 207 - __tileloadd(tiledata); 208 - GUEST_SYNC(10); 175 + vector = tileloadd_safe(tiledata); 176 + __GUEST_ASSERT(vector == NM_VECTOR, 177 + "Wanted #NM on tileloadd with XFD[18]=1, got %s", 178 + ex_str(vector)); 209 179 210 - GUEST_DONE(); 211 - } 212 - 213 - void guest_nm_handler(struct ex_regs *regs) 214 - { 215 - /* Check if #NM is triggered by XFEATURE_MASK_XTILE_DATA */ 216 - GUEST_SYNC(7); 217 180 GUEST_ASSERT(!(get_cr0() & X86_CR0_TS)); 218 181 GUEST_ASSERT(rdmsr(MSR_IA32_XFD_ERR) == XFEATURE_MASK_XTILE_DATA); 219 182 GUEST_ASSERT(rdmsr(MSR_IA32_XFD) == XFEATURE_MASK_XTILE_DATA); 220 - GUEST_SYNC(8); 183 + GUEST_SYNC(TEST_SAVE_RESTORE); 221 184 GUEST_ASSERT(rdmsr(MSR_IA32_XFD_ERR) == XFEATURE_MASK_XTILE_DATA); 222 185 GUEST_ASSERT(rdmsr(MSR_IA32_XFD) == XFEATURE_MASK_XTILE_DATA); 223 186 /* Clear xfd_err */ 224 187 wrmsr(MSR_IA32_XFD_ERR, 0); 225 188 /* xfd=0, enable amx */ 226 189 wrmsr(MSR_IA32_XFD, 0); 227 - GUEST_SYNC(9); 190 + GUEST_SYNC(TEST_SAVE_RESTORE); 191 + 192 + __tileloadd(tiledata); 193 + GUEST_SYNC(TEST_COMPARE_TILEDATA | TEST_SAVE_RESTORE); 194 + 195 + GUEST_DONE(); 228 196 } 229 197 230 198 int main(int argc, char *argv[]) ··· 234 200 struct kvm_vcpu *vcpu; 235 201 struct kvm_vm *vm; 236 202 struct kvm_x86_state *state; 203 + struct kvm_x86_state *tile_state = NULL; 237 204 int xsave_restore_size; 238 205 vm_vaddr_t amx_cfg, tiledata, xstate; 239 206 struct ucall uc; 240 - u32 amx_offset; 241 207 int ret; 242 208 243 209 /* ··· 262 228 263 229 vcpu_regs_get(vcpu, &regs1); 264 230 265 - /* Register #NM handler */ 266 - vm_install_exception_handler(vm, NM_VECTOR, guest_nm_handler); 267 - 268 231 /* amx cfg for guest_code */ 269 232 amx_cfg = vm_vaddr_alloc_page(vm); 270 233 memset(addr_gva2hva(vm, amx_cfg), 0x0, getpagesize()); ··· 275 244 memset(addr_gva2hva(vm, xstate), 0, PAGE_SIZE * DIV_ROUND_UP(XSAVE_SIZE, PAGE_SIZE)); 276 245 vcpu_args_set(vcpu, 3, amx_cfg, tiledata, xstate); 277 246 247 + int iter = 0; 278 248 for (;;) { 279 249 vcpu_run(vcpu); 280 250 TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_IO); ··· 285 253 REPORT_GUEST_ASSERT(uc); 286 254 /* NOT REACHED */ 287 255 case UCALL_SYNC: 288 - switch (uc.args[1]) { 289 - case 1: 290 - case 2: 291 - case 3: 292 - case 5: 293 - case 6: 294 - case 7: 295 - case 8: 296 - fprintf(stderr, "GUEST_SYNC(%ld)\n", uc.args[1]); 297 - break; 298 - case 4: 299 - case 10: 300 - fprintf(stderr, 301 - "GUEST_SYNC(%ld), check save/restore status\n", uc.args[1]); 256 + ++iter; 257 + if (uc.args[1] & TEST_SAVE_TILEDATA) { 258 + fprintf(stderr, "GUEST_SYNC #%d, save tiledata\n", iter); 259 + tile_state = vcpu_save_state(vcpu); 260 + } 261 + if (uc.args[1] & TEST_COMPARE_TILEDATA) { 262 + fprintf(stderr, "GUEST_SYNC #%d, check TMM0 contents\n", iter); 302 263 303 264 /* Compacted mode, get amx offset by 
xsave area 304 265 * size subtract 8K amx size. 305 266 */ 306 - amx_offset = xsave_restore_size - NUM_TILES*TILE_SIZE; 307 - state = vcpu_save_state(vcpu); 308 - void *amx_start = (void *)state->xsave + amx_offset; 267 + u32 amx_offset = xsave_restore_size - NUM_TILES*TILE_SIZE; 268 + void *amx_start = (void *)tile_state->xsave + amx_offset; 309 269 void *tiles_data = (void *)addr_gva2hva(vm, tiledata); 310 270 /* Only check TMM0 register, 1 tile */ 311 271 ret = memcmp(amx_start, tiles_data, TILE_SIZE); 312 272 TEST_ASSERT(ret == 0, "memcmp failed, ret=%d", ret); 273 + } 274 + if (uc.args[1] & TEST_RESTORE_TILEDATA) { 275 + fprintf(stderr, "GUEST_SYNC #%d, before KVM_SET_XSAVE\n", iter); 276 + vcpu_xsave_set(vcpu, tile_state->xsave); 277 + fprintf(stderr, "GUEST_SYNC #%d, after KVM_SET_XSAVE\n", iter); 278 + } 279 + if (uc.args[1] & TEST_SAVE_RESTORE) { 280 + fprintf(stderr, "GUEST_SYNC #%d, save/restore VM state\n", iter); 281 + state = vcpu_save_state(vcpu); 282 + memset(&regs1, 0, sizeof(regs1)); 283 + vcpu_regs_get(vcpu, &regs1); 284 + 285 + kvm_vm_release(vm); 286 + 287 + /* Restore state in a new VM. */ 288 + vcpu = vm_recreate_with_one_vcpu(vm); 289 + vcpu_load_state(vcpu, state); 313 290 kvm_x86_state_cleanup(state); 314 - break; 315 - case 9: 316 - fprintf(stderr, 317 - "GUEST_SYNC(%ld), #NM exception and enable amx\n", uc.args[1]); 318 - break; 291 + 292 + memset(&regs2, 0, sizeof(regs2)); 293 + vcpu_regs_get(vcpu, &regs2); 294 + TEST_ASSERT(!memcmp(&regs1, &regs2, sizeof(regs2)), 295 + "Unexpected register values after vcpu_load_state; rdi: %lx rsi: %lx", 296 + (ulong) regs2.rdi, (ulong) regs2.rsi); 319 297 } 320 298 break; 321 299 case UCALL_DONE: ··· 335 293 TEST_FAIL("Unknown ucall %lu", uc.cmd); 336 294 } 337 295 338 - state = vcpu_save_state(vcpu); 339 - memset(&regs1, 0, sizeof(regs1)); 340 - vcpu_regs_get(vcpu, &regs1); 341 - 342 - kvm_vm_release(vm); 343 - 344 - /* Restore state in a new VM. */ 345 - vcpu = vm_recreate_with_one_vcpu(vm); 346 - vcpu_load_state(vcpu, state); 347 - kvm_x86_state_cleanup(state); 348 - 349 - memset(&regs2, 0, sizeof(regs2)); 350 - vcpu_regs_get(vcpu, &regs2); 351 - TEST_ASSERT(!memcmp(&regs1, &regs2, sizeof(regs2)), 352 - "Unexpected register values after vcpu_load_state; rdi: %lx rsi: %lx", 353 - (ulong) regs2.rdi, (ulong) regs2.rsi); 354 296 } 355 297 done: 356 298 kvm_vm_free(vm);
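The sequence-number protocol is gone: the GUEST_SYNC payload is now an OR of TEST_* flags, so a single guest exit can request several host actions, and the full VM save/restore runs only when TEST_SAVE_RESTORE is present. The raw .byte string in tileloadd_safe() encodes tileloadd tmm0, [rax + rdx] (the same bytes as __tileloadd()), routed through kvm_asm_safe() so a #NM comes back as a return value instead of killing the guest. A distilled, standalone model of the host-side dispatch, with printf standing in for the real handlers:

#include <stdio.h>

/* Same flag values as the test; one sync may carry several actions. */
enum {
	TEST_SAVE_TILEDATA	= 1,
	TEST_COMPARE_TILEDATA	= 2,
	TEST_RESTORE_TILEDATA	= 4,
	TEST_SAVE_RESTORE	= 8,
};

static void dispatch(unsigned long cmd)
{
	/* Order matters: a save requested together with a compare must
	 * capture the tile state before it is compared against.
	 */
	if (cmd & TEST_SAVE_TILEDATA)
		printf("save tiledata\n");
	if (cmd & TEST_COMPARE_TILEDATA)
		printf("compare tiledata\n");
	if (cmd & TEST_RESTORE_TILEDATA)
		printf("restore tiledata via KVM_SET_XSAVE\n");
	if (cmd & TEST_SAVE_RESTORE)
		printf("full VM save/restore\n");
}

int main(void)
{
	/* Mirrors GUEST_SYNC(TEST_SAVE_TILEDATA | TEST_COMPARE_TILEDATA |
	 * TEST_SAVE_RESTORE) from the guest code above.
	 */
	dispatch(TEST_SAVE_TILEDATA | TEST_COMPARE_TILEDATA | TEST_SAVE_RESTORE);
	return 0;
}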
+12
tools/testing/vsock/util.c
··· 511 511 
512 512 printf("ok\n");
513 513 }
514 +
515 + printf("All tests have been executed. Waiting for the other peer...");
516 + fflush(stdout);
517 +
518 + /*
519 + * Final full barrier, to ensure that all tests have been run and
520 + * that even the last one has been successful on both sides.
521 + */
522 + control_writeln("COMPLETED");
523 + control_expectln("COMPLETED");
524 +
525 + printf("ok\n");
514 526 }
515 527 
516 528 void list_tests(const struct test_case *test_cases)
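The closing handshake is symmetric on purpose: if only one side waited, the faster peer could tear down its control connection while the slower one was still inside its last test, turning a genuine failure into a spurious connection error. The two calls generalize into a reusable rendezvous; a sketch assuming the control_writeln()/control_expectln() helpers already used in this file:

/* Sketch: symmetric rendezvous over the control socket. Both peers call
 * this with the same token; whoever arrives first blocks in
 * control_expectln() until the other side has sent its token too.
 */
static void control_barrier(const char *token)
{
	control_writeln(token);		/* announce we reached the barrier */
	control_expectln(token);	/* wait until the peer did too */
}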