···
 What:		/sys/kernel/time/aux_clocks/<ID>/enable
 Date:		May 2025
-Contact:	Thomas Gleixner <tglx@linutronix.de>
+Contact:	Thomas Gleixner <tglx@kernel.org>
 Description:
		Controls the enablement of auxiliary clock timekeepers.
Documentation/ABI/testing/sysfs-devices-soc
···
 contact:	Lee Jones <lee@kernel.org>
 Description:
		Read-only attribute common to all SoCs. Contains the SoC machine
-		name (e.g. Ux500).
+		name (e.g. DB8500).

 What:		/sys/devices/socX/family
 Date:		January 2012
 contact:	Lee Jones <lee@kernel.org>
 Description:
		Read-only attribute common to all SoCs. Contains SoC family name
-		(e.g. DB8500).
+		(e.g. ux500).

		On many of ARM based silicon with SMCCC v1.2+ compliant firmware
		this will contain the JEDEC JEP106 manufacturer’s identification
Documentation/arch/x86/topology.rst
···
 Needless to say, code should use the generic functions - this file is *only*
 here to *document* the inner workings of x86 topology.

-Started by Thomas Gleixner <tglx@linutronix.de> and Borislav Petkov <bp@alien8.de>.
+Started by Thomas Gleixner <tglx@kernel.org> and Borislav Petkov <bp@alien8.de>.

 The main aim of the topology facilities is to present adequate interfaces to
 code which needs to know/query/use the structure of the running system wrt
Documentation/core-api/cpu_hotplug.rst
···
	Srivatsa Vaddagiri <vatsa@in.ibm.com>,
	Ashok Raj <ashok.raj@intel.com>,
	Joel Schopp <jschopp@austin.ibm.com>,
-	Thomas Gleixner <tglx@linutronix.de>
+	Thomas Gleixner <tglx@kernel.org>

 Introduction
 ============
Documentation/core-api/genericirq.rst
···
 The following people have contributed to this document:

-1. Thomas Gleixner tglx@linutronix.de
+1. Thomas Gleixner tglx@kernel.org

 2. Ingo Molnar mingo@elte.hu
Documentation/core-api/librs.rst
···
 The following people have contributed to this document:

-Thomas Gleixner\ tglx@linutronix.de
+Thomas Gleixner\ tglx@kernel.org
···
   areg-supply:
     description: |
-      Regulator with AVDD at 3.3V. If not defined then the internal regulator
-      is enabled.
+      External supply of 1.8V. If not defined then the internal regulator is
+      enabled instead.
+
+  avdd-supply: true
+  iovdd-supply: true

   ti,mic-bias-source:
     description: |
···

 maintainers:
   - Daniel Lezcano <daniel.lezcano@linaro.org>
-  - Thomas Gleixner <tglx@linutronix.de>
+  - Thomas Gleixner <tglx@kernel.org>
   - Rob Herring <robh@kernel.org>

 properties:
Documentation/driver-api/mtdnand.rst
···
 2. David Woodhouse\ dwmw2@infradead.org

-3. Thomas Gleixner\ tglx@linutronix.de
+3. Thomas Gleixner\ tglx@kernel.org

 A lot of users have provided bugfixes, improvements and helping hands
 for testing. Thanks a lot.

 The following people have contributed to this document:

-1. Thomas Gleixner\ tglx@linutronix.de
+1. Thomas Gleixner\ tglx@kernel.org
Documentation/filesystems/locking.rst
···
 lm_breaker_owns_lease:	yes		no		no
 lm_lock_expirable	yes		no		no
 lm_expire_lock		no		no		yes
+lm_open_conflict	yes		no		no
 ======================	=============	=================	=========

 buffer_head
Documentation/netlink/specs/netdev.yaml
···
       name: ifindex
       doc: |
         ifindex of the netdev to which the pool belongs.
-        May be reported as 0 if the page pool was allocated for a netdev
+        May not be reported if the page pool was allocated for a netdev
         which got destroyed already (page pools may outlast their netdevs
         because they wait for all memory to be returned).
       type: u32
···
       name: page-pool-get
       doc: |
         Get / dump information about Page Pools.
-        (Only Page Pools associated with a net_device can be listed.)
+        Only Page Pools associated by the driver with a net_device
+        can be listed. ifindex will not be reported if the net_device
+        no longer exists.
       attribute-set: page-pool
       do:
         request:
Documentation/process/maintainer-soc.rst
···

 All typical platform related patches should be sent via SoC submaintainers
 (platform-specific maintainers). This includes also changes to per-platform or
-shared defconfigs (scripts/get_maintainer.pl might not provide correct
-addresses in such case).
+shared defconfigs. Note that scripts/get_maintainer.pl might not provide
+correct addresses for the shared defconfig, so ignore its output and manually
+create the CC-list based on the MAINTAINERS file or use something like
+``scripts/get_maintainer.pl -f drivers/soc/FOO/``.

 Submitting Patches to the Main SoC Maintainers
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
···
 Usually the branch that includes a driver change will also include the
 corresponding change to the devicetree binding description, to ensure they are
 in fact compatible. This means that the devicetree branch can end up causing
-warnings in the "make dtbs_check" step. If a devicetree change depends on
+warnings in the ``make dtbs_check`` step. If a devicetree change depends on
 missing additions to a header file in include/dt-bindings/, it will fail the
-"make dtbs" step and not get merged.
+``make dtbs`` step and not get merged.

 There are multiple ways to deal with this:
···
 感谢以下人士对本文档作出的贡献:

-1. Thomas Gleixner tglx@linutronix.de
+1. Thomas Gleixner tglx@kernel.org

 2. Ingo Molnar mingo@elte.hu
MAINTAINERS
···
 AMD XGBE DRIVER
 M:	"Shyam Sundar S K" <Shyam-sundar.S-k@amd.com>
+M:	Raju Rangoju <Raju.Rangoju@amd.com>
 L:	netdev@vger.kernel.org
 S:	Maintained
 F:	arch/arm64/boot/dts/amd/amd-seattle-xgbe*.dtsi
···
 M:	Arnd Bergmann <arnd@arndb.de>
 M:	Krzysztof Kozlowski <krzk@kernel.org>
 M:	Alexandre Belloni <alexandre.belloni@bootlin.com>
-M:	Linus Walleij <linus.walleij@linaro.org>
+M:	Linus Walleij <linusw@kernel.org>
 R:	Drew Fustini <fustini@kernel.org>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L:	soc@lists.linux.dev
···
 L:	dri-devel@lists.freedesktop.org
 S:	Supported
 W:	https://rust-for-linux.com/tyr-gpu-driver
-W https://drm.pages.freedesktop.org/maintainer-tools/drm-rust.html
+W:	https://drm.pages.freedesktop.org/maintainer-tools/drm-rust.html
 B:	https://gitlab.freedesktop.org/panfrost/linux/-/issues
 T:	git https://gitlab.freedesktop.org/drm/rust/kernel.git
 F:	Documentation/devicetree/bindings/gpu/arm,mali-valhall-csf.yaml
···

 CEPH COMMON CODE (LIBCEPH)
 M:	Ilya Dryomov <idryomov@gmail.com>
-M:	Xiubo Li <xiubli@redhat.com>
+M:	Alex Markuze <amarkuze@redhat.com>
+M:	Viacheslav Dubeyko <slava@dubeyko.com>
 L:	ceph-devel@vger.kernel.org
 S:	Supported
 W:	http://ceph.com/
···
 F:	net/ceph/

 CEPH DISTRIBUTED FILE SYSTEM CLIENT (CEPH)
-M:	Xiubo Li <xiubli@redhat.com>
 M:	Ilya Dryomov <idryomov@gmail.com>
+M:	Alex Markuze <amarkuze@redhat.com>
+M:	Viacheslav Dubeyko <slava@dubeyko.com>
 L:	ceph-devel@vger.kernel.org
 S:	Supported
 W:	http://ceph.com/
···

 CLOCKSOURCE, CLOCKEVENT DRIVERS
 M:	Daniel Lezcano <daniel.lezcano@linaro.org>
-M:	Thomas Gleixner <tglx@linutronix.de>
+M:	Thomas Gleixner <tglx@kernel.org>
 L:	linux-kernel@vger.kernel.org
 S:	Supported
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git timers/core
···
 F:	tools/testing/selftests/cpufreq/

 CPU FREQUENCY DRIVERS - VIRTUAL MACHINE CPUFREQ
-M:	Saravana Kannan <saravanak@google.com>
+M:	Saravana Kannan <saravanak@kernel.org>
 L:	linux-pm@vger.kernel.org
 S:	Maintained
 F:	drivers/cpufreq/virtual-cpufreq.c

 CPU HOTPLUG
-M:	Thomas Gleixner <tglx@linutronix.de>
+M:	Thomas Gleixner <tglx@kernel.org>
 M:	Peter Zijlstra <peterz@infradead.org>
 L:	linux-kernel@vger.kernel.org
 S:	Maintained
···
 T:	git https://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux.git libcrypto-next
 T:	git https://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux.git libcrypto-fixes
 F:	lib/crypto/
+F:	scripts/crypto/

 CRYPTO SPEED TEST COMPARE
 M:	Wang Jinchao <wangjinchao@xfusion.com>
···
 F:	drivers/scsi/dc395x.*

 DEBUGOBJECTS:
-M:	Thomas Gleixner <tglx@linutronix.de>
+M:	Thomas Gleixner <tglx@kernel.org>
 L:	linux-kernel@vger.kernel.org
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git core/debugobjects
···
 F:	include/linux/devcoredump.h

 DEVICE DEPENDENCY HELPER SCRIPT
-M:	Saravana Kannan <saravanak@google.com>
+M:	Saravana Kannan <saravanak@kernel.org>
 L:	linux-kernel@vger.kernel.org
 S:	Maintained
 F:	scripts/dev-needs.sh
···
 Q:	https://patchwork.freedesktop.org/project/nouveau/
 B:	https://gitlab.freedesktop.org/drm/nova/-/issues
 C:	irc://irc.oftc.net/nouveau
-T:	git https://gitlab.freedesktop.org/drm/nova.git nova-next
+T:	git https://gitlab.freedesktop.org/drm/rust/kernel.git drm-rust-next
 F:	Documentation/gpu/nova/
 F:	drivers/gpu/nova-core/
···
 Q:	https://patchwork.freedesktop.org/project/nouveau/
 B:	https://gitlab.freedesktop.org/drm/nova/-/issues
 C:	irc://irc.oftc.net/nouveau
-T:	git https://gitlab.freedesktop.org/drm/nova.git nova-next
+T:	git https://gitlab.freedesktop.org/drm/rust/kernel.git drm-rust-next
 F:	Documentation/gpu/nova/
 F:	drivers/gpu/drm/nova/
 F:	include/uapi/drm/nova_drm.h
···
 X:	drivers/gpu/drm/nova/
 X:	drivers/gpu/drm/radeon/
 X:	drivers/gpu/drm/tegra/
+X:	drivers/gpu/drm/tyr/
 X:	drivers/gpu/drm/xe/

 DRM DRIVERS AND COMMON INFRASTRUCTURE [RUST]
···
 F:	tools/testing/selftests/filesystems/fuse/

 FUTEX SUBSYSTEM
-M:	Thomas Gleixner <tglx@linutronix.de>
+M:	Thomas Gleixner <tglx@kernel.org>
 M:	Ingo Molnar <mingo@redhat.com>
 R:	Peter Zijlstra <peterz@infradead.org>
 R:	Darren Hart <dvhart@infradead.org>
···
 F:	include/linux/arch_topology.h

 GENERIC ENTRY CODE
-M:	Thomas Gleixner <tglx@linutronix.de>
+M:	Thomas Gleixner <tglx@kernel.org>
 M:	Peter Zijlstra <peterz@infradead.org>
 M:	Andy Lutomirski <luto@kernel.org>
 L:	linux-kernel@vger.kernel.org
···

 GENERIC VDSO LIBRARY
 M:	Andy Lutomirski <luto@kernel.org>
-M:	Thomas Gleixner <tglx@linutronix.de>
+M:	Thomas Gleixner <tglx@kernel.org>
 M:	Vincenzo Frascino <vincenzo.frascino@arm.com>
 L:	linux-kernel@vger.kernel.org
 S:	Maintained
···
 HIGH-RESOLUTION TIMERS, TIMER WHEEL, CLOCKEVENTS
 M:	Anna-Maria Behnsen <anna-maria@linutronix.de>
 M:	Frederic Weisbecker <frederic@kernel.org>
-M:	Thomas Gleixner <tglx@linutronix.de>
+M:	Thomas Gleixner <tglx@kernel.org>
 L:	linux-kernel@vger.kernel.org
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git timers/core
···
 R:	FUJITA Tomonori <fujita.tomonori@gmail.com>
 R:	Frederic Weisbecker <frederic@kernel.org>
 R:	Lyude Paul <lyude@redhat.com>
-R:	Thomas Gleixner <tglx@linutronix.de>
+R:	Thomas Gleixner <tglx@kernel.org>
 R:	Anna-Maria Behnsen <anna-maria@linutronix.de>
 R:	John Stultz <jstultz@google.com>
 R:	Stephen Boyd <sboyd@kernel.org>
···
 F:	sound/soc/codecs/sma*

 IRQ DOMAINS (IRQ NUMBER MAPPING LIBRARY)
-M:	Thomas Gleixner <tglx@linutronix.de>
+M:	Thomas Gleixner <tglx@kernel.org>
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git irq/core
 F:	Documentation/core-api/irq/irq-domain.rst
···
 F:	kernel/irq/msi.c

 IRQ SUBSYSTEM
-M:	Thomas Gleixner <tglx@linutronix.de>
+M:	Thomas Gleixner <tglx@kernel.org>
 L:	linux-kernel@vger.kernel.org
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git irq/core
···
 F:	lib/group_cpus.c

 IRQCHIP DRIVERS
-M:	Thomas Gleixner <tglx@linutronix.de>
+M:	Thomas Gleixner <tglx@kernel.org>
 L:	linux-kernel@vger.kernel.org
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git irq/core
···
 F:	lib/*

 LICENSES and SPDX stuff
-M:	Thomas Gleixner <tglx@linutronix.de>
+M:	Thomas Gleixner <tglx@kernel.org>
 M:	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 L:	linux-spdx@vger.kernel.org
 S:	Maintained
···
 X:	tools/testing/selftests/net/can/

 NETWORKING [IOAM]
-M:	Justin Iurman <justin.iurman@uliege.be>
+M:	Justin Iurman <justin.iurman@gmail.com>
 S:	Maintained
 F:	Documentation/networking/ioam6*
 F:	include/linux/ioam6*
···
 M:	Anna-Maria Behnsen <anna-maria@linutronix.de>
 M:	Frederic Weisbecker <frederic@kernel.org>
 M:	Ingo Molnar <mingo@kernel.org>
-M:	Thomas Gleixner <tglx@linutronix.de>
+M:	Thomas Gleixner <tglx@kernel.org>
 L:	linux-kernel@vger.kernel.org
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git timers/nohz
···

 OPEN FIRMWARE AND FLATTENED DEVICE TREE
 M:	Rob Herring <robh@kernel.org>
-M:	Saravana Kannan <saravanak@google.com>
+M:	Saravana Kannan <saravanak@kernel.org>
 L:	devicetree@vger.kernel.org
 S:	Maintained
 Q:	http://patchwork.kernel.org/project/devicetree/list/
···
 POSIX CLOCKS and TIMERS
 M:	Anna-Maria Behnsen <anna-maria@linutronix.de>
 M:	Frederic Weisbecker <frederic@kernel.org>
-M:	Thomas Gleixner <tglx@linutronix.de>
+M:	Thomas Gleixner <tglx@kernel.org>
 L:	linux-kernel@vger.kernel.org
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git timers/core
···

 TIMEKEEPING, CLOCKSOURCE CORE, NTP, ALARMTIMER
 M:	John Stultz <jstultz@google.com>
-M:	Thomas Gleixner <tglx@linutronix.de>
+M:	Thomas Gleixner <tglx@kernel.org>
 R:	Stephen Boyd <sboyd@kernel.org>
 L:	linux-kernel@vger.kernel.org
 S:	Supported
···
 F:	net/x25/

 X86 ARCHITECTURE (32-BIT AND 64-BIT)
-M:	Thomas Gleixner <tglx@linutronix.de>
+M:	Thomas Gleixner <tglx@kernel.org>
 M:	Ingo Molnar <mingo@redhat.com>
 M:	Borislav Petkov <bp@alien8.de>
 M:	Dave Hansen <dave.hansen@linux.intel.com>
···

 X86 CPUID DATABASE
 M:	Borislav Petkov <bp@alien8.de>
-M:	Thomas Gleixner <tglx@linutronix.de>
+M:	Thomas Gleixner <tglx@kernel.org>
 M:	x86@kernel.org
 R:	Ahmed S. Darwish <darwi@linutronix.de>
 L:	x86-cpuid@lists.linux.dev
···
 F:	arch/x86/entry/

 X86 HARDWARE VULNERABILITIES
-M:	Thomas Gleixner <tglx@linutronix.de>
+M:	Thomas Gleixner <tglx@kernel.org>
 M:	Borislav Petkov <bp@alien8.de>
 M:	Peter Zijlstra <peterz@infradead.org>
 M:	Josh Poimboeuf <jpoimboe@kernel.org>
···
 // SPDX-License-Identifier: (GPL-2.0 OR MIT)
 /*
- * bcm2712-rpi-5-b-ovl-rp1.dts is the overlay-ready DT which will make
- * the RP1 driver to load the RP1 dtb overlay at runtime, while
- * bcm2712-rpi-5-b.dts (this file) is the fully defined one (i.e. it
- * already contains RP1 node, so no overlay is loaded nor needed).
- * This file is intended to host the override nodes for the RP1 peripherals,
- * e.g. to declare the phy of the ethernet interface or the custom pin setup
- * for several RP1 peripherals.
- * This in turn is due to the fact that there's no current generic
- * infrastructure to reference nodes (i.e. the nodes in rp1-common.dtsi) that
- * are not yet defined in the DT since they are loaded at runtime via overlay.
+ * As a loose attempt to separate RP1 customizations from SoC peripheral
+ * definitions, this file is intended to host the override nodes for the RP1
+ * peripherals, e.g. to declare the phy of the ethernet interface or custom
+ * pin setup.
  * All other nodes that do not have anything to do with RP1 should be added
- * to the included bcm2712-rpi-5-b-ovl-rp1.dts instead.
+ * to the included bcm2712-rpi-5-b-base.dtsi instead.
  */

 /dts-v1/;

-#include "bcm2712-rpi-5-b-ovl-rp1.dts"
+#include "bcm2712-rpi-5-b-base.dtsi"

 / {
	aliases {
···
 };

 &pcie2 {
-	#include "rp1-nexus.dtsi"
+	pci@0,0 {
+		reg = <0x0 0x0 0x0 0x0 0x0>;
+		ranges;
+		bus-range = <0 1>;
+		device_type = "pci";
+		#address-cells = <3>;
+		#size-cells = <2>;
+
+		dev@0,0 {
+			compatible = "pci1de4,1";
+			reg = <0x10000 0x0 0x0 0x0 0x0>;
+			ranges = <0x1 0x0 0x0 0x82010000 0x0 0x0 0x0 0x400000>;
+			interrupt-controller;
+			#interrupt-cells = <2>;
+			#address-cells = <3>;
+			#size-cells = <2>;
+
+			#include "rp1-common.dtsi"
+		};
+	};
 };

 &rp1_eth {
···

 endif

-ifdef CONFIG_RELOCATABLE
-$(obj)/Image: vmlinux.unstripped FORCE
-else
 $(obj)/Image: vmlinux FORCE
-endif
	$(call if_changed,objcopy)

 $(obj)/Image.gz: $(obj)/Image FORCE
arch/riscv/configs/nommu_k210_defconfig
···
 # CONFIG_HW_RANDOM is not set
 # CONFIG_DEVMEM is not set
 CONFIG_I2C=y
-# CONFIG_I2C_COMPAT is not set
 CONFIG_I2C_CHARDEV=y
 # CONFIG_I2C_HELPER_AUTO is not set
 CONFIG_I2C_DESIGNWARE_CORE=y
···
 # CONFIG_FRAME_POINTER is not set
 # CONFIG_DEBUG_MISC is not set
 CONFIG_PANIC_ON_OOPS=y
-# CONFIG_SCHED_DEBUG is not set
 # CONFIG_RCU_TRACE is not set
 # CONFIG_FTRACE is not set
 # CONFIG_RUNTIME_TESTING_MENU is not set
arch/riscv/configs/nommu_k210_sdcard_defconfig
···
 # CONFIG_FRAME_POINTER is not set
 # CONFIG_DEBUG_MISC is not set
 CONFIG_PANIC_ON_OOPS=y
-# CONFIG_SCHED_DEBUG is not set
 # CONFIG_RCU_TRACE is not set
 # CONFIG_FTRACE is not set
 # CONFIG_RUNTIME_TESTING_MENU is not set
arch/riscv/configs/nommu_virt_defconfig
···
 # CONFIG_MISC_FILESYSTEMS is not set
 CONFIG_LSM="[]"
 CONFIG_PRINTK_TIME=y
-# CONFIG_SCHED_DEBUG is not set
 # CONFIG_RCU_TRACE is not set
 # CONFIG_FTRACE is not set
 # CONFIG_RUNTIME_TESTING_MENU is not set
···
	int ret;

	ret = sbi_hsm_hart_stop();
-	pr_crit("Unable to stop the cpu %u (%d)\n", smp_processor_id(), ret);
+	pr_crit("Unable to stop the cpu %d (%d)\n", smp_processor_id(), ret);
 }

 static int sbi_cpu_is_stopped(unsigned int cpuid)
···
	if (!h || kernel_len < sizeof(*h))
		return -EINVAL;

-	/* According to Documentation/riscv/boot-image-header.rst,
+	/* According to Documentation/arch/riscv/boot-image-header.rst,
	 * use "magic2" field to check when version >= 0.2.
	 */

···

	add_random_kstack_offset();

-	if (syscall >= 0 && syscall < NR_syscalls)
+	if (syscall >= 0 && syscall < NR_syscalls) {
+		syscall = array_index_nospec(syscall, NR_syscalls);
		syscall_handler(regs, syscall);
+	}

	/*
	 * Ultimately, this value will get limited by KSTACK_OFFSET_MAX(),
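The hunk above hardens the bounds check with array_index_nospec(), which clamps the index branchlessly so a mispredicted branch cannot speculatively index out of bounds. A userspace sketch of the same mask trick (the kernel helper in include/linux/nospec.h uses this formula; the wrapper name here is illustrative):

```c
#include <assert.h>

/* Branchless clamp in the spirit of array_index_nospec(): returns idx when
 * idx < size and 0 otherwise. Assumes size >= 1 and size <= LONG_MAX, and
 * relies on arithmetic right shift of a negative long, as the kernel does. */
static unsigned long index_nospec(unsigned long idx, unsigned long size)
{
	/* mask is all-ones when idx < size, all-zeroes otherwise */
	unsigned long mask =
		(unsigned long)(~(long)(idx | (size - 1 - idx)) >>
				(sizeof(long) * 8 - 1));
	return idx & mask;
}
```

In-bounds indices pass through unchanged; anything out of range collapses to slot 0 instead of reaching past the table, even under speculation.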
arch/sh/kernel/perf_event.c
···
  * Heavily based on the x86 and PowerPC implementations.
  *
  * x86:
- * Copyright (C) 2008 Thomas Gleixner <tglx@linutronix.de>
+ * Copyright (C) 2008 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>
  * Copyright (C) 2008-2009 Red Hat, Inc., Ingo Molnar
  * Copyright (C) 2009 Jaswinder Singh Rajput
  * Copyright (C) 2009 Advanced Micro Devices, Inc., Robert Richter
arch/sparc/kernel/pci.c
···

 __setup("ofpci_debug=", ofpci_debug);

+static void of_fixup_pci_pref(struct pci_dev *dev, int index,
+			      struct resource *res)
+{
+	struct pci_bus_region region;
+
+	if (!(res->flags & IORESOURCE_MEM_64))
+		return;
+
+	if (!resource_size(res))
+		return;
+
+	pcibios_resource_to_bus(dev->bus, &region, res);
+	if (region.end <= ~((u32)0))
+		return;
+
+	if (!(res->flags & IORESOURCE_PREFETCH)) {
+		res->flags |= IORESOURCE_PREFETCH;
+		pci_info(dev, "reg 0x%x: fixup: pref added to 64-bit resource\n",
+			 index);
+	}
+}
+
 static unsigned long pci_parse_of_flags(u32 addr0)
 {
	unsigned long flags = 0;
···
		res->end = op_res->end;
		res->flags = flags;
		res->name = pci_name(dev);
+		of_fixup_pci_pref(dev, i, res);

		pci_info(dev, "reg 0x%x: %pR\n", i, res);
	}
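The fixup above only forces the prefetchable attribute when a 64-bit memory resource actually ends above the 32-bit boundary (`region.end <= ~((u32)0)` returns early otherwise). The predicate can be sketched in isolation; the function name and flat parameters are illustrative, not the kernel's types:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the decision in of_fixup_pci_pref(): a fixup is needed only for
 * a 64-bit, non-prefetchable memory resource whose bus address range ends
 * above 4 GiB (the same test as the patch's "region.end <= ~((u32)0)",
 * inverted). */
static int needs_pref_fixup(uint64_t region_end, int is_mem64, int is_pref)
{
	if (!is_mem64 || is_pref)
		return 0;
	return region_end > (uint64_t)UINT32_MAX;
}
```

Resources entirely below 4 GiB, 32-bit BARs, and already-prefetchable ranges are all left alone.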
arch/sparc/kernel/perf_event.c
···
  * This code is based almost entirely upon the x86 perf event
  * code, which is:
  *
- * Copyright (C) 2008 Thomas Gleixner <tglx@linutronix.de>
+ * Copyright (C) 2008 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>
  * Copyright (C) 2008-2009 Red Hat, Inc., Ingo Molnar
  * Copyright (C) 2009 Jaswinder Singh Rajput
  * Copyright (C) 2009 Advanced Micro Devices, Inc., Robert Richter
arch/x86/coco/sev/Makefile
···
 # GCC may fail to respect __no_sanitize_address or __no_kcsan when inlining
 KASAN_SANITIZE_noinstr.o := n
 KCSAN_SANITIZE_noinstr.o := n
+
+GCOV_PROFILE_noinstr.o := n
arch/x86/events/core.c
···
 /*
  * Performance events x86 architecture code
  *
- * Copyright (C) 2008 Thomas Gleixner <tglx@linutronix.de>
+ * Copyright (C) 2008 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>
  * Copyright (C) 2008-2009 Red Hat, Inc., Ingo Molnar
  * Copyright (C) 2009 Jaswinder Singh Rajput
  * Copyright (C) 2009 Advanced Micro Devices, Inc., Robert Richter
arch/x86/events/perf_event.h
···
 /*
  * Performance events x86 architecture header
  *
- * Copyright (C) 2008 Thomas Gleixner <tglx@linutronix.de>
+ * Copyright (C) 2008 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>
  * Copyright (C) 2008-2009 Red Hat, Inc., Ingo Molnar
  * Copyright (C) 2009 Jaswinder Singh Rajput
  * Copyright (C) 2009 Advanced Micro Devices, Inc., Robert Richter
arch/x86/kernel/x86_init.c
···
 /*
- * Copyright (C) 2009 Thomas Gleixner <tglx@linutronix.de>
+ * Copyright (C) 2009 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>
  *
  * For licencing details see kernel-base/COPYING
  */
arch/x86/mm/pti.c
···
  * Signed-off-by: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
  *
  * Major changes to the original code by: Dave Hansen <dave.hansen@intel.com>
- * Mostly rewritten by Thomas Gleixner <tglx@linutronix.de> and
+ * Mostly rewritten by Thomas Gleixner <tglx@kernel.org> and
  * Andy Lutomirsky <luto@amacapital.net>
  */
 #include <linux/kernel.h>
block/blk-integrity.c
···
 bool blk_integrity_merge_rq(struct request_queue *q, struct request *req,
			    struct request *next)
 {
+	struct bio_integrity_payload *bip, *bip_next;
+
	if (blk_integrity_rq(req) == 0 && blk_integrity_rq(next) == 0)
		return true;

	if (blk_integrity_rq(req) == 0 || blk_integrity_rq(next) == 0)
		return false;

-	if (bio_integrity(req->bio)->bip_flags !=
-	    bio_integrity(next->bio)->bip_flags)
+	bip = bio_integrity(req->bio);
+	bip_next = bio_integrity(next->bio);
+	if (bip->bip_flags != bip_next->bip_flags)
+		return false;
+
+	if (bip->bip_flags & BIP_CHECK_APPTAG &&
+	    bip->app_tag != bip_next->app_tag)
		return false;

	if (req->nr_integrity_segments + next->nr_integrity_segments >
···
 bool blk_integrity_merge_bio(struct request_queue *q, struct request *req,
			     struct bio *bio)
 {
+	struct bio_integrity_payload *bip, *bip_bio = bio_integrity(bio);
	int nr_integrity_segs;

-	if (blk_integrity_rq(req) == 0 && bio_integrity(bio) == NULL)
+	if (blk_integrity_rq(req) == 0 && bip_bio == NULL)
		return true;

-	if (blk_integrity_rq(req) == 0 || bio_integrity(bio) == NULL)
+	if (blk_integrity_rq(req) == 0 || bip_bio == NULL)
		return false;

-	if (bio_integrity(req->bio)->bip_flags != bio_integrity(bio)->bip_flags)
+	bip = bio_integrity(req->bio);
+	if (bip->bip_flags != bip_bio->bip_flags)
+		return false;
+
+	if (bip->bip_flags & BIP_CHECK_APPTAG &&
+	    bip->app_tag != bip_bio->app_tag)
		return false;

	nr_integrity_segs = blk_rq_count_integrity_sg(q, bio);
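The change above tightens the merge rule: two requests with matching integrity flags may still not merge when app-tag checking is enabled and their tags differ. The gate can be reduced to a small predicate; the type, flag value, and function name below are illustrative stand-ins, not the kernel's:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for BIP_CHECK_APPTAG */
#define CHECK_APPTAG 0x4

/* Sketch of the merge gate from blk_integrity_merge_rq()/_bio(): flags must
 * match, and if app-tag checking is enabled the app tags must match too. */
static int integrity_may_merge(uint16_t flags_a, uint16_t tag_a,
			       uint16_t flags_b, uint16_t tag_b)
{
	if (flags_a != flags_b)
		return 0;
	if ((flags_a & CHECK_APPTAG) && tag_a != tag_b)
		return 0;
	return 1;
}
```

With checking disabled the tags are irrelevant; with it enabled, differing tags would corrupt the merged request's protection information, so the merge is refused.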
block/blk-mq.c
···
		 * Make sure reading the old queue_hw_ctx from other
		 * context concurrently won't trigger uaf.
		 */
-		synchronize_rcu_expedited();
-		kfree(hctxs);
+		kfree_rcu_mightsleep(hctxs);
		hctxs = new_hctxs;
	}

block/blk-rq-qos.h
···

 static inline void rq_qos_cleanup(struct request_queue *q, struct bio *bio)
 {
-	if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
-	    q->rq_qos)
+	if (test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags) && q->rq_qos)
		__rq_qos_cleanup(q->rq_qos, bio);
 }

 static inline void rq_qos_done(struct request_queue *q, struct request *rq)
 {
-	if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
-	    q->rq_qos && !blk_rq_is_passthrough(rq))
+	if (test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags) &&
+	    q->rq_qos && !blk_rq_is_passthrough(rq))
		__rq_qos_done(q->rq_qos, rq);
 }

 static inline void rq_qos_issue(struct request_queue *q, struct request *rq)
 {
-	if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
-	    q->rq_qos)
+	if (test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags) && q->rq_qos)
		__rq_qos_issue(q->rq_qos, rq);
 }

 static inline void rq_qos_requeue(struct request_queue *q, struct request *rq)
 {
-	if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
-	    q->rq_qos)
+	if (test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags) && q->rq_qos)
		__rq_qos_requeue(q->rq_qos, rq);
 }
···
 static inline void rq_qos_throttle(struct request_queue *q, struct bio *bio)
 {
-	if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
-	    q->rq_qos) {
+	if (test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags) && q->rq_qos) {
		bio_set_flag(bio, BIO_QOS_THROTTLED);
		__rq_qos_throttle(q->rq_qos, bio);
	}
···
 static inline void rq_qos_track(struct request_queue *q, struct request *rq,
				struct bio *bio)
 {
-	if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
-	    q->rq_qos)
+	if (test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags) && q->rq_qos)
		__rq_qos_track(q->rq_qos, rq, bio);
 }

 static inline void rq_qos_merge(struct request_queue *q, struct request *rq,
				struct bio *bio)
 {
-	if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
-	    q->rq_qos) {
+	if (test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags) && q->rq_qos) {
		bio_set_flag(bio, BIO_QOS_MERGED);
		__rq_qos_merge(q->rq_qos, rq, bio);
	}
···
 static inline void rq_qos_queue_depth_changed(struct request_queue *q)
 {
-	if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
-	    q->rq_qos)
+	if (test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags) && q->rq_qos)
		__rq_qos_queue_depth_changed(q->rq_qos);
 }

drivers/acpi/pci_irq.c
···
	 * the IRQ value, which is hardwired to specific interrupt inputs on
	 * the interrupt controller.
	 */
-	pr_debug("%04x:%02x:%02x[%c] -> %s[%d]\n",
+	pr_debug("%04x:%02x:%02x[%c] -> %s[%u]\n",
		 entry->id.segment, entry->id.bus, entry->id.device,
		 pin_name(entry->pin), prt->source, entry->index);
···
 int acpi_pci_irq_enable(struct pci_dev *dev)
 {
	struct acpi_prt_entry *entry;
-	int gsi;
+	u32 gsi;
	u8 pin;
	int triggering = ACPI_LEVEL_SENSITIVE;
	/*
···
		return 0;
	}

+	rc = -ENODEV;
+
	if (entry) {
		if (entry->link)
-			gsi = acpi_pci_link_allocate_irq(entry->link,
+			rc = acpi_pci_link_allocate_irq(entry->link,
							entry->index,
							&triggering, &polarity,
-							&link);
-		else
+							&link, &gsi);
+		else {
			gsi = entry->index;
-	} else
-		gsi = -1;
+			rc = 0;
+		}
+	}

-	if (gsi < 0) {
+	if (rc < 0) {
		/*
		 * No IRQ known to the ACPI subsystem - maybe the BIOS /
		 * driver reported one, then use it. Exit in any case.
drivers/acpi/pci_link.c
···
	/* >IRQ15 */
 };

-static int acpi_irq_pci_sharing_penalty(int irq)
+static int acpi_irq_pci_sharing_penalty(u32 irq)
 {
	struct acpi_pci_link *link;
	int penalty = 0;
···
	return penalty;
 }

-static int acpi_irq_get_penalty(int irq)
+static int acpi_irq_get_penalty(u32 irq)
 {
	int penalty = 0;

···
 static int acpi_pci_link_allocate(struct acpi_pci_link *link)
 {
	acpi_handle handle = link->device->handle;
-	int irq;
+	u32 irq;
	int i;

	if (link->irq.initialized) {
···
	return 0;
 }

-/*
- * acpi_pci_link_allocate_irq
- *   success: return IRQ >= 0
- *   failure: return -1
+/**
+ * acpi_pci_link_allocate_irq(): Retrieve a link device GSI
+ *
+ * @handle: Handle for the link device
+ * @index: GSI index
+ * @triggering: pointer to store the GSI trigger
+ * @polarity: pointer to store GSI polarity
+ * @name: pointer to store link device name
+ * @gsi: pointer to store GSI number
+ *
+ * Returns:
+ * 0 on success with @triggering, @polarity, @name, @gsi initialized.
+ * -ENODEV on failure
  */
 int acpi_pci_link_allocate_irq(acpi_handle handle, int index, int *triggering,
-			       int *polarity, char **name)
+			       int *polarity, char **name, u32 *gsi)
 {
	struct acpi_device *device = acpi_fetch_acpi_dev(handle);
	struct acpi_pci_link *link;

	if (!device) {
		acpi_handle_err(handle, "Invalid link device\n");
-		return -1;
+		return -ENODEV;
	}

	link = acpi_driver_data(device);
	if (!link) {
		acpi_handle_err(handle, "Invalid link context\n");
-		return -1;
+		return -ENODEV;
	}

	/* TBD: Support multiple index (IRQ) entries per Link Device */
	if (index) {
		acpi_handle_err(handle, "Invalid index %d\n", index);
-		return -1;
+		return -ENODEV;
	}

	mutex_lock(&acpi_link_lock);
	if (acpi_pci_link_allocate(link)) {
		mutex_unlock(&acpi_link_lock);
-		return -1;
+		return -ENODEV;
	}

	if (!link->irq.active) {
		mutex_unlock(&acpi_link_lock);
		acpi_handle_err(handle, "Link active IRQ is 0!\n");
-		return -1;
+		return -ENODEV;
	}
	link->refcnt++;
	mutex_unlock(&acpi_link_lock);
···
	if (name)
		*name = acpi_device_bid(link->device);
	acpi_handle_debug(handle, "Link is referenced\n");
-	return link->irq.active;
+	*gsi = link->irq.active;
+
+	return 0;
 }

 /*
-3
drivers/android/binder/page_range.rs
···727727 drop(mm);728728 drop(page);729729730730- // SAFETY: We just unlocked the lru lock, but it should be locked when we return.731731- unsafe { bindings::spin_lock(&raw mut (*lru).lock) };732732-733730 LRU_REMOVED_ENTRY734731}
···943943 DECLARE_BITMAP(old_stat, MAX_LINE);944944 DECLARE_BITMAP(cur_stat, MAX_LINE);945945 DECLARE_BITMAP(new_stat, MAX_LINE);946946+ DECLARE_BITMAP(int_stat, MAX_LINE);946947 DECLARE_BITMAP(trigger, MAX_LINE);947948 DECLARE_BITMAP(edges, MAX_LINE);948949 int ret;949950951951+ if (chip->driver_data & PCA_PCAL) {952952+ /* Read INT_STAT before it is cleared by the input-port read. */953953+ ret = pca953x_read_regs(chip, PCAL953X_INT_STAT, int_stat);954954+ if (ret)955955+ return false;956956+ }957957+950958 ret = pca953x_read_regs(chip, chip->regs->input, cur_stat);951959 if (ret)952960 return false;961961+962962+ if (chip->driver_data & PCA_PCAL) {963963+ /* Detect short pulses via INT_STAT. */964964+ bitmap_and(trigger, int_stat, chip->irq_mask, gc->ngpio);965965+966966+ /* Apply filter for rising/falling edge selection. */967967+ bitmap_replace(new_stat, chip->irq_trig_fall, chip->irq_trig_raise,968968+ cur_stat, gc->ngpio);969969+970970+ bitmap_and(int_stat, new_stat, trigger, gc->ngpio);971971+ } else {972972+ bitmap_zero(int_stat, gc->ngpio);973973+ }953974954975 /* Remove output pins from the equation */955976 pca953x_read_regs(chip, chip->regs->direction, reg_direction);···985964986965 if (bitmap_empty(chip->irq_trig_level_high, gc->ngpio) &&987966 bitmap_empty(chip->irq_trig_level_low, gc->ngpio)) {988988- if (bitmap_empty(trigger, gc->ngpio))967967+ if (bitmap_empty(trigger, gc->ngpio) &&968968+ bitmap_empty(int_stat, gc->ngpio))989969 return false;990970 }991971···994972 bitmap_and(old_stat, chip->irq_trig_raise, new_stat, gc->ngpio);995973 bitmap_or(edges, old_stat, cur_stat, gc->ngpio);996974 bitmap_and(pending, edges, trigger, gc->ngpio);975975+ bitmap_or(pending, pending, int_stat, gc->ngpio);997976998977 bitmap_and(cur_stat, new_stat, chip->irq_trig_level_high, gc->ngpio);999978 bitmap_and(cur_stat, cur_stat, chip->irq_mask, gc->ngpio);
+1
drivers/gpio/gpio-rockchip.c
···593593 gc->ngpio = bank->nr_pins;594594 gc->label = bank->name;595595 gc->parent = bank->dev;596596+ gc->can_sleep = true;596597597598 ret = gpiochip_add_data(gc, bank);598599 if (ret) {
+180-71
drivers/gpio/gpiolib-shared.c
···3838 int dev_id;3939 /* Protects the auxiliary device struct and the lookup table. */4040 struct mutex lock;4141+ struct lock_class_key lock_key;4142 struct auxiliary_device adev;4243 struct gpiod_lookup_table *lookup;4444+ bool is_reset_gpio;4345};44464547/* Represents a single GPIO pin. */···7876 return NULL;7977}80787979+static struct gpio_shared_ref *gpio_shared_make_ref(struct fwnode_handle *fwnode,8080+ const char *con_id,8181+ enum gpiod_flags flags)8282+{8383+ char *con_id_cpy __free(kfree) = NULL;8484+8585+ struct gpio_shared_ref *ref __free(kfree) = kzalloc(sizeof(*ref), GFP_KERNEL);8686+ if (!ref)8787+ return NULL;8888+8989+ if (con_id) {9090+ con_id_cpy = kstrdup(con_id, GFP_KERNEL);9191+ if (!con_id_cpy)9292+ return NULL;9393+ }9494+9595+ ref->dev_id = ida_alloc(&gpio_shared_ida, GFP_KERNEL);9696+ if (ref->dev_id < 0)9797+ return NULL;9898+9999+ ref->flags = flags;100100+ ref->con_id = no_free_ptr(con_id_cpy);101101+ ref->fwnode = fwnode;102102+ lockdep_register_key(&ref->lock_key);103103+ mutex_init_with_key(&ref->lock, &ref->lock_key);104104+105105+ return no_free_ptr(ref);106106+}107107+108108+static int gpio_shared_setup_reset_proxy(struct gpio_shared_entry *entry,109109+ enum gpiod_flags flags)110110+{111111+ struct gpio_shared_ref *ref;112112+113113+ list_for_each_entry(ref, &entry->refs, list) {114114+ if (ref->is_reset_gpio)115115+ /* Already set-up. */116116+ return 0;117117+ }118118+119119+ ref = gpio_shared_make_ref(NULL, "reset", flags);120120+ if (!ref)121121+ return -ENOMEM;122122+123123+ ref->is_reset_gpio = true;124124+125125+ list_add_tail(&ref->list, &entry->refs);126126+127127+ pr_debug("Created a secondary shared GPIO reference for potential reset-gpio device for GPIO %u at %s\n",128128+ entry->offset, fwnode_get_name(entry->fwnode));129129+130130+ return 0;131131+}132132+81133/* Handle all special nodes that we should ignore. 
*/82134static bool gpio_shared_of_node_ignore(struct device_node *node)83135{···162106 size_t con_id_len, suffix_len;163107 struct fwnode_handle *fwnode;164108 struct of_phandle_args args;109109+ struct gpio_shared_ref *ref;165110 struct property *prop;166111 unsigned int offset;167112 const char *suffix;···195138196139 for (i = 0; i < count; i++) {197140 struct device_node *np __free(device_node) = NULL;141141+ char *con_id __free(kfree) = NULL;198142199143 ret = of_parse_phandle_with_args(curr, prop->name,200144 "#gpio-cells", i,···240182 list_add_tail(&entry->list, &gpio_shared_list);241183 }242184243243- struct gpio_shared_ref *ref __free(kfree) =244244- kzalloc(sizeof(*ref), GFP_KERNEL);245245- if (!ref)246246- return -ENOMEM;247247-248248- ref->fwnode = fwnode_handle_get(of_fwnode_handle(curr));249249- ref->flags = args.args[1];250250- mutex_init(&ref->lock);251251-252185 if (strends(prop->name, "gpios"))253186 suffix = "-gpios";254187 else if (strends(prop->name, "gpio"))···251202252203 /* We only set con_id if there's actually one. 
*/253204 if (strcmp(prop->name, "gpios") && strcmp(prop->name, "gpio")) {254254- ref->con_id = kstrdup(prop->name, GFP_KERNEL);255255- if (!ref->con_id)205205+ con_id = kstrdup(prop->name, GFP_KERNEL);206206+ if (!con_id)256207 return -ENOMEM;257208258258- con_id_len = strlen(ref->con_id);209209+ con_id_len = strlen(con_id);259210 suffix_len = strlen(suffix);260211261261- ref->con_id[con_id_len - suffix_len] = '\0';212212+ con_id[con_id_len - suffix_len] = '\0';262213 }263214264264- ref->dev_id = ida_alloc(&gpio_shared_ida, GFP_KERNEL);265265- if (ref->dev_id < 0) {266266- kfree(ref->con_id);215215+ ref = gpio_shared_make_ref(fwnode_handle_get(of_fwnode_handle(curr)),216216+ con_id, args.args[1]);217217+ if (!ref)267218 return -ENOMEM;268268- }269219270220 if (!list_empty(&entry->refs))271221 pr_debug("GPIO %u at %s is shared by multiple firmware nodes\n",272222 entry->offset, fwnode_get_name(entry->fwnode));273223274274- list_add_tail(&no_free_ptr(ref)->list, &entry->refs);224224+ list_add_tail(&ref->list, &entry->refs);225225+226226+ if (strcmp(prop->name, "reset-gpios") == 0) {227227+ ret = gpio_shared_setup_reset_proxy(entry, args.args[1]);228228+ if (ret)229229+ return ret;230230+ }275231 }276232 }277233···360306 struct fwnode_handle *reset_fwnode = dev_fwnode(consumer);361307 struct fwnode_reference_args ref_args, aux_args;362308 struct device *parent = consumer->parent;309309+ struct gpio_shared_ref *real_ref;363310 bool match;364311 int ret;365312313313+ lockdep_assert_held(&ref->lock);314314+366315 /* The reset-gpio device must have a parent AND a firmware node. 
*/367316 if (!parent || !reset_fwnode)368368- return false;369369-370370- /*371371- * FIXME: use device_is_compatible() once the reset-gpio drivers gains372372- * a compatible string which it currently does not have.373373- */374374- if (!strstarts(dev_name(consumer), "reset.gpio."))375317 return false;376318377319 /*···378328 return false;379329380330 /*381381- * The device associated with the shared reference's firmware node is382382- * the consumer of the reset control exposed by the reset-gpio device.383383- * It must have a "reset-gpios" property that's referencing the entry's384384- * firmware node.385385- *386386- * The reference args must agree between the real consumer and the387387- * auxiliary reset-gpio device.331331+ * Now we need to find the actual pin we want to assign to this332332+ * reset-gpio device. To that end: iterate over the list of references333333+ * of this entry and see if there's one, whose reset-gpios property's334334+ * arguments match the ones from this consumer's node.388335 */389389- ret = fwnode_property_get_reference_args(ref->fwnode, "reset-gpios",390390- NULL, 2, 0, &ref_args);391391- if (ret)392392- return false;336336+ list_for_each_entry(real_ref, &entry->refs, list) {337337+ if (real_ref == ref)338338+ continue;393339394394- ret = fwnode_property_get_reference_args(reset_fwnode, "reset-gpios",395395- NULL, 2, 0, &aux_args);396396- if (ret) {340340+ guard(mutex)(&real_ref->lock);341341+342342+ if (!real_ref->fwnode)343343+ continue;344344+345345+ /*346346+ * The device associated with the shared reference's firmware347347+ * node is the consumer of the reset control exposed by the348348+ * reset-gpio device. 
It must have a "reset-gpios" property349349+ * that's referencing the entry's firmware node.350350+ *351351+ * The reference args must agree between the real consumer and352352+ * the auxiliary reset-gpio device.353353+ */354354+ ret = fwnode_property_get_reference_args(real_ref->fwnode,355355+ "reset-gpios",356356+ NULL, 2, 0, &ref_args);357357+ if (ret)358358+ continue;359359+360360+ ret = fwnode_property_get_reference_args(reset_fwnode, "reset-gpios",361361+ NULL, 2, 0, &aux_args);362362+ if (ret) {363363+ fwnode_handle_put(ref_args.fwnode);364364+ continue;365365+ }366366+367367+ match = ((ref_args.fwnode == entry->fwnode) &&368368+ (aux_args.fwnode == entry->fwnode) &&369369+ (ref_args.args[0] == aux_args.args[0]));370370+397371 fwnode_handle_put(ref_args.fwnode);398398- return false;372372+ fwnode_handle_put(aux_args.fwnode);373373+374374+ if (!match)375375+ continue;376376+377377+ /*378378+ * Reuse the fwnode of the real device, next time we'll use it379379+ * in the normal path.380380+ */381381+ ref->fwnode = fwnode_handle_get(reset_fwnode);382382+ return true;399383 }400384401401- match = ((ref_args.fwnode == entry->fwnode) &&402402- (aux_args.fwnode == entry->fwnode) &&403403- (ref_args.args[0] == aux_args.args[0]));404404-405405- fwnode_handle_put(ref_args.fwnode);406406- fwnode_handle_put(aux_args.fwnode);407407- return match;385385+ return false;408386}409387#else410388static bool gpio_shared_dev_is_reset_gpio(struct device *consumer,···443365}444366#endif /* CONFIG_RESET_GPIO */445367446446-int gpio_shared_add_proxy_lookup(struct device *consumer, unsigned long lflags)368368+int gpio_shared_add_proxy_lookup(struct device *consumer, const char *con_id,369369+ unsigned long lflags)447370{448371 const char *dev_id = dev_name(consumer);372372+ struct gpiod_lookup_table *lookup;449373 struct gpio_shared_entry *entry;450374 struct gpio_shared_ref *ref;451375452452- struct gpiod_lookup_table *lookup __free(kfree) =453453- kzalloc(struct_size(lookup, table, 
2), GFP_KERNEL);454454- if (!lookup)455455- return -ENOMEM;456456-457376 list_for_each_entry(entry, &gpio_shared_list, list) {458377 list_for_each_entry(ref, &entry->refs, list) {459459- if (!device_match_fwnode(consumer, ref->fwnode) &&460460- !gpio_shared_dev_is_reset_gpio(consumer, entry, ref))461461- continue;462462-463378 guard(mutex)(&ref->lock);379379+380380+ /*381381+ * FIXME: use device_is_compatible() once the reset-gpio382382+ * drivers gains a compatible string which it currently383383+ * does not have.384384+ */385385+ if (!ref->fwnode && strstarts(dev_name(consumer), "reset.gpio.")) {386386+ if (!gpio_shared_dev_is_reset_gpio(consumer, entry, ref))387387+ continue;388388+ } else if (!device_match_fwnode(consumer, ref->fwnode)) {389389+ continue;390390+ }391391+392392+ if ((!con_id && ref->con_id) || (con_id && !ref->con_id) ||393393+ (con_id && ref->con_id && strcmp(con_id, ref->con_id) != 0))394394+ continue;464395465396 /* We've already done that on a previous request. */466397 if (ref->lookup)···482395 if (!key)483396 return -ENOMEM;484397398398+ lookup = kzalloc(struct_size(lookup, table, 2), GFP_KERNEL);399399+ if (!lookup)400400+ return -ENOMEM;401401+485402 pr_debug("Adding machine lookup entry for a shared GPIO for consumer %s, with key '%s' and con_id '%s'\n",486403 dev_id, key, ref->con_id ?: "none");487404···493402 lookup->table[0] = GPIO_LOOKUP(no_free_ptr(key), 0,494403 ref->con_id, lflags);495404496496- ref->lookup = no_free_ptr(lookup);405405+ ref->lookup = lookup;497406 gpiod_add_lookup_table(ref->lookup);498407499408 return 0;···557466 entry->offset, gpio_device_get_label(gdev));558467559468 list_for_each_entry(ref, &entry->refs, list) {560560- pr_debug("Setting up a shared GPIO entry for %s\n",561561- fwnode_get_name(ref->fwnode));469469+ pr_debug("Setting up a shared GPIO entry for %s (con_id: '%s')\n",470470+ fwnode_get_name(ref->fwnode) ?: "(no fwnode)",471471+ ref->con_id ?: "(none)");562472563473 ret = 
gpio_shared_make_adev(gdev, entry, ref);564474 if (ret)···579487 if (!device_match_fwnode(&gdev->dev, entry->fwnode))580488 continue;581489582582- /*583583- * For some reason if we call synchronize_srcu() in GPIO core,584584- * descent here and take this mutex and then recursively call585585- * synchronize_srcu() again from gpiochip_remove() (which is586586- * totally fine) called after gpio_shared_remove_adev(),587587- * lockdep prints a false positive deadlock splat. Disable588588- * lockdep here.589589- */590590- lockdep_off();591490 list_for_each_entry(ref, &entry->refs, list) {592491 guard(mutex)(&ref->lock);593492···591508592509 gpio_shared_remove_adev(&ref->adev);593510 }594594- lockdep_on();595511 }596512}597513···686604{687605 list_del(&ref->list);688606 mutex_destroy(&ref->lock);607607+ lockdep_unregister_key(&ref->lock_key);689608 kfree(ref->con_id);690609 ida_free(&gpio_shared_ida, ref->dev_id);691610 fwnode_handle_put(ref->fwnode);···718635 }719636}720637638638+static bool gpio_shared_entry_is_really_shared(struct gpio_shared_entry *entry)639639+{640640+ size_t num_nodes = list_count_nodes(&entry->refs);641641+ struct gpio_shared_ref *ref;642642+643643+ if (num_nodes <= 1)644644+ return false;645645+646646+ if (num_nodes > 2)647647+ return true;648648+649649+ /* Exactly two references: */650650+ list_for_each_entry(ref, &entry->refs, list) {651651+ /*652652+ * Corner-case: the second reference comes from the potential653653+ * reset-gpio instance. However, this pin is not really shared654654+ * as it would have three references in this case. 
Avoid655655+ * creating unnecessary proxies.656656+ */657657+ if (ref->is_reset_gpio)658658+ return false;659659+ }660660+661661+ return true;662662+}663663+721664static void gpio_shared_free_exclusive(void)722665{723666 struct gpio_shared_entry *entry, *epos;724667725668 list_for_each_entry_safe(entry, epos, &gpio_shared_list, list) {726726- if (list_count_nodes(&entry->refs) > 1)669669+ if (gpio_shared_entry_is_really_shared(entry))727670 continue;728671729672 gpio_shared_drop_ref(list_first_entry(&entry->refs,
···3838struct isp_funcs {3939 int (*hw_init)(struct amdgpu_isp *isp);4040 int (*hw_fini)(struct amdgpu_isp *isp);4141+ int (*hw_suspend)(struct amdgpu_isp *isp);4242+ int (*hw_resume)(struct amdgpu_isp *isp);4143};42444345struct amdgpu_isp {
+6
drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
···201201 type = (amdgpu_device_ip_get_ip_block(adev, AMD_IP_BLOCK_TYPE_JPEG)) ?202202 AMD_IP_BLOCK_TYPE_JPEG : AMD_IP_BLOCK_TYPE_VCN;203203 break;204204+ case AMDGPU_HW_IP_VPE:205205+ type = AMD_IP_BLOCK_TYPE_VPE;206206+ break;204207 default:205208 type = AMD_IP_BLOCK_TYPE_NUM;206209 break;···723720 break;724721 case AMD_IP_BLOCK_TYPE_UVD:725722 count = adev->uvd.num_uvd_inst;723723+ break;724724+ case AMD_IP_BLOCK_TYPE_VPE:725725+ count = adev->vpe.num_instances;726726 break;727727 /* For all other IP block types not listed in the switch statement728728 * the ip status is valid here and the instance count is one.
+6-1
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
···144144 struct amdgpu_ring *ring;145145 ktime_t start_timestamp;146146147147- /* wptr for the fence for resets */147147+ /* wptr for the total submission for resets */148148 u64 wptr;149149 /* fence context for resets */150150 u64 context;151151+ /* has this fence been reemitted */152152+ unsigned int reemitted;153153+ /* wptr for the fence for the submission */154154+ u64 fence_wptr_start;155155+ u64 fence_wptr_end;151156};152157153158extern const struct drm_sched_backend_ops amdgpu_sched_ops;
+41
drivers/gpu/drm/amd/amdgpu/isp_v4_1_1.c
···2626 */27272828#include <linux/gpio/machine.h>2929+#include <linux/pm_runtime.h>2930#include "amdgpu.h"3031#include "isp_v4_1_1.h"3132···146145 return -ENODEV;147146 }148147148148+ /* The devices will be managed by the pm ops from the parent */149149+ dev_pm_syscore_device(dev, true);150150+149151exit:150152 /* Continue to add */151153 return 0;···181177 drm_err(&adev->ddev, "Failed to remove dev from genpd %d\n", ret);182178 return -ENODEV;183179 }180180+ dev_pm_syscore_device(dev, false);184181185182exit:186183 /* Continue to remove */187184 return 0;185185+}186186+187187+static int isp_suspend_device(struct device *dev, void *data)188188+{189189+ return pm_runtime_force_suspend(dev);190190+}191191+192192+static int isp_resume_device(struct device *dev, void *data)193193+{194194+ return pm_runtime_force_resume(dev);195195+}196196+197197+static int isp_v4_1_1_hw_suspend(struct amdgpu_isp *isp)198198+{199199+ int r;200200+201201+ r = device_for_each_child(isp->parent, NULL,202202+ isp_suspend_device);203203+ if (r)204204+ dev_err(isp->parent, "failed to suspend hw devices (%d)\n", r);205205+206206+ return r;207207+}208208+209209+static int isp_v4_1_1_hw_resume(struct amdgpu_isp *isp)210210+{211211+ int r;212212+213213+ r = device_for_each_child(isp->parent, NULL,214214+ isp_resume_device);215215+ if (r)216216+ dev_err(isp->parent, "failed to resume hw device (%d)\n", r);217217+218218+ return r;188219}189220190221static int isp_v4_1_1_hw_init(struct amdgpu_isp *isp)···408369static const struct isp_funcs isp_v4_1_1_funcs = {409370 .hw_init = isp_v4_1_1_hw_init,410371 .hw_fini = isp_v4_1_1_hw_fini,372372+ .hw_suspend = isp_v4_1_1_hw_suspend,373373+ .hw_resume = isp_v4_1_1_hw_resume,411374};412375413376void isp_v4_1_1_set_isp_funcs(struct amdgpu_isp *isp)
···2143214321442144 ret = smu_cmn_send_debug_smc_msg(smu, DEBUGSMC_MSG_Mode1Reset);21452145 if (!ret) {21462146- if (amdgpu_emu_mode == 1)21462146+ if (amdgpu_emu_mode == 1) {21472147 msleep(50000);21482148- else21482148+ } else {21492149+ /* disable mmio access while doing mode 1 reset */21502150+ smu->adev->no_hw_access = true;21512151+ /* ensure no_hw_access is globally visible before any MMIO */21522152+ smp_mb();21492153 msleep(1000);21542154+ }21502155 }2151215621522157 return ret;
+99-23
drivers/gpu/drm/drm_atomic_helper.c
···11621162 new_state->self_refresh_active;11631163}1164116411651165-static void11661166-encoder_bridge_disable(struct drm_device *dev, struct drm_atomic_state *state)11651165+/**11661166+ * drm_atomic_helper_commit_encoder_bridge_disable - disable bridges and encoder11671167+ * @dev: DRM device11681168+ * @state: the driver state object11691169+ *11701170+ * Loops over all connectors in the current state and if the CRTC needs11711171+ * it, disables the bridge chain all the way, then disables the encoder11721172+ * afterwards.11731173+ */11741174+void11751175+drm_atomic_helper_commit_encoder_bridge_disable(struct drm_device *dev,11761176+ struct drm_atomic_state *state)11671177{11681178 struct drm_connector *connector;11691179 struct drm_connector_state *old_conn_state, *new_conn_state;···12391229 }12401230 }12411231}12321232+EXPORT_SYMBOL(drm_atomic_helper_commit_encoder_bridge_disable);1242123312431243-static void12441244-crtc_disable(struct drm_device *dev, struct drm_atomic_state *state)12341234+/**12351235+ * drm_atomic_helper_commit_crtc_disable - disable CRTCs12361236+ * @dev: DRM device12371237+ * @state: the driver state object12381238+ *12391239+ * Loops over all CRTCs in the current state and if the CRTC needs12401240+ * it, disables it.12411241+ */12421242+void12431243+drm_atomic_helper_commit_crtc_disable(struct drm_device *dev, struct drm_atomic_state *state)12451244{12461245 struct drm_crtc *crtc;12471246 struct drm_crtc_state *old_crtc_state, *new_crtc_state;···13011282 drm_crtc_vblank_put(crtc);13021283 }13031284}12851285+EXPORT_SYMBOL(drm_atomic_helper_commit_crtc_disable);1304128613051305-static void13061306-encoder_bridge_post_disable(struct drm_device *dev, struct drm_atomic_state *state)12871287+/**12881288+ * drm_atomic_helper_commit_encoder_bridge_post_disable - post-disable encoder bridges12891289+ * @dev: DRM device12901290+ * @state: the driver state object12911291+ *12921292+ * Loops over all connectors in the current state and if the 
CRTC needs12931293+ * it, post-disables all encoder bridges.12941294+ */12951295+void12961296+drm_atomic_helper_commit_encoder_bridge_post_disable(struct drm_device *dev, struct drm_atomic_state *state)13071297{13081298 struct drm_connector *connector;13091299 struct drm_connector_state *old_conn_state, *new_conn_state;···13631335 drm_bridge_put(bridge);13641336 }13651337}13381338+EXPORT_SYMBOL(drm_atomic_helper_commit_encoder_bridge_post_disable);1366133913671340static void13681341disable_outputs(struct drm_device *dev, struct drm_atomic_state *state)13691342{13701370- encoder_bridge_disable(dev, state);13431343+ drm_atomic_helper_commit_encoder_bridge_disable(dev, state);1371134413721372- crtc_disable(dev, state);13451345+ drm_atomic_helper_commit_encoder_bridge_post_disable(dev, state);1373134613741374- encoder_bridge_post_disable(dev, state);13471347+ drm_atomic_helper_commit_crtc_disable(dev, state);13751348}1376134913771350/**···14751446}14761447EXPORT_SYMBOL(drm_atomic_helper_calc_timestamping_constants);1477144814781478-static void14791479-crtc_set_mode(struct drm_device *dev, struct drm_atomic_state *state)14491449+/**14501450+ * drm_atomic_helper_commit_crtc_set_mode - set the new mode14511451+ * @dev: DRM device14521452+ * @state: the driver state object14531453+ *14541454+ * Loops over all connectors in the current state and if the mode has14551455+ * changed, change the mode of the CRTC, then call down the bridge14561456+ * chain and change the mode in all bridges as well.14571457+ */14581458+void14591459+drm_atomic_helper_commit_crtc_set_mode(struct drm_device *dev, struct drm_atomic_state *state)14801460{14811461 struct drm_crtc *crtc;14821462 struct drm_crtc_state *new_crtc_state;···15461508 drm_bridge_put(bridge);15471509 }15481510}15111511+EXPORT_SYMBOL(drm_atomic_helper_commit_crtc_set_mode);1549151215501513/**15511514 * drm_atomic_helper_commit_modeset_disables - modeset commit to disable outputs···15701531 
drm_atomic_helper_update_legacy_modeset_state(dev, state);15711532 drm_atomic_helper_calc_timestamping_constants(state);1572153315731573- crtc_set_mode(dev, state);15341534+ drm_atomic_helper_commit_crtc_set_mode(dev, state);15741535}15751536EXPORT_SYMBOL(drm_atomic_helper_commit_modeset_disables);1576153715771577-static void drm_atomic_helper_commit_writebacks(struct drm_device *dev,15781578- struct drm_atomic_state *state)15381538+/**15391539+ * drm_atomic_helper_commit_writebacks - issue writebacks15401540+ * @dev: DRM device15411541+ * @state: atomic state object being committed15421542+ *15431543+ * This loops over the connectors, checks if the new state requires15441544+ * a writeback job to be issued and in that case issues an atomic15451545+ * commit on each connector.15461546+ */15471547+void drm_atomic_helper_commit_writebacks(struct drm_device *dev,15481548+ struct drm_atomic_state *state)15791549{15801550 struct drm_connector *connector;15811551 struct drm_connector_state *new_conn_state;···16031555 }16041556 }16051557}15581558+EXPORT_SYMBOL(drm_atomic_helper_commit_writebacks);1606155916071607-static void16081608-encoder_bridge_pre_enable(struct drm_device *dev, struct drm_atomic_state *state)15601560+/**15611561+ * drm_atomic_helper_commit_encoder_bridge_pre_enable - pre-enable bridges15621562+ * @dev: DRM device15631563+ * @state: atomic state object being committed15641564+ *15651565+ * This loops over the connectors and if the CRTC needs it, pre-enables15661566+ * the entire bridge chain.15671567+ */15681568+void15691569+drm_atomic_helper_commit_encoder_bridge_pre_enable(struct drm_device *dev, struct drm_atomic_state *state)16091570{16101571 struct drm_connector *connector;16111572 struct drm_connector_state *new_conn_state;···16451588 drm_bridge_put(bridge);16461589 }16471590}15911591+EXPORT_SYMBOL(drm_atomic_helper_commit_encoder_bridge_pre_enable);1648159216491649-static void16501650-crtc_enable(struct drm_device *dev, struct drm_atomic_state 
*state)15931593+/**15941594+ * drm_atomic_helper_commit_crtc_enable - enables the CRTCs15951595+ * @dev: DRM device15961596+ * @state: atomic state object being committed15971597+ *15981598+ * This loops over CRTCs in the new state, and if the CRTC needs15991599+ * it, enables it.16001600+ */16011601+void16021602+drm_atomic_helper_commit_crtc_enable(struct drm_device *dev, struct drm_atomic_state *state)16511603{16521604 struct drm_crtc *crtc;16531605 struct drm_crtc_state *old_crtc_state;···16851619 }16861620 }16871621}16221622+EXPORT_SYMBOL(drm_atomic_helper_commit_crtc_enable);1688162316891689-static void16901690-encoder_bridge_enable(struct drm_device *dev, struct drm_atomic_state *state)16241624+/**16251625+ * drm_atomic_helper_commit_encoder_bridge_enable - enables the bridges16261626+ * @dev: DRM device16271627+ * @state: atomic state object being committed16281628+ *16291629+ * This loops over all connectors in the new state, and if the CRTC needs16301630+ * it, enables the entire bridge chain.16311631+ */16321632+void16331633+drm_atomic_helper_commit_encoder_bridge_enable(struct drm_device *dev, struct drm_atomic_state *state)16911634{16921635 struct drm_connector *connector;16931636 struct drm_connector_state *new_conn_state;···17391664 drm_bridge_put(bridge);17401665 }17411666}16671667+EXPORT_SYMBOL(drm_atomic_helper_commit_encoder_bridge_enable);1742166817431669/**17441670 * drm_atomic_helper_commit_modeset_enables - modeset commit to enable outputs···17581682void drm_atomic_helper_commit_modeset_enables(struct drm_device *dev,17591683 struct drm_atomic_state *state)17601684{17611761- encoder_bridge_pre_enable(dev, state);16851685+ drm_atomic_helper_commit_crtc_enable(dev, state);1762168617631763- crtc_enable(dev, state);16871687+ drm_atomic_helper_commit_encoder_bridge_pre_enable(dev, state);1764168817651765- encoder_bridge_enable(dev, state);16891689+ drm_atomic_helper_commit_encoder_bridge_enable(dev, state);1766169017671691 
drm_atomic_helper_commit_writebacks(dev, state);17681692}
+10
drivers/gpu/drm/drm_fb_helper.c
···366366{367367 struct drm_fb_helper *helper = container_of(work, struct drm_fb_helper, damage_work);368368369369+ if (helper->info->state != FBINFO_STATE_RUNNING)370370+ return;371371+369372 drm_fb_helper_fb_dirty(helper);370373}371374···734731 if (suspend) {735732 if (fb_helper->info->state != FBINFO_STATE_RUNNING)736733 return;734734+735735+ /*736736+ * Cancel pending damage work. During GPU reset, VBlank737737+ * interrupts are disabled and drm_fb_helper_fb_dirty()738738+ * would wait for VBlank timeout otherwise.739739+ */740740+ cancel_work_sync(&fb_helper->damage_work);737741738742 console_lock();739743
···10021002 return PTR_ERR(dsi->next_bridge);10031003 }1004100410051005- /*10061006- * set flag to request the DSI host bridge be pre-enabled before device bridge10071007- * in the chain, so the DSI host is ready when the device bridge is pre-enabled10081008- */10091009- dsi->next_bridge->pre_enable_prev_first = true;10101010-10111005 drm_bridge_add(&dsi->bridge);1012100610131007 ret = component_add(host->dev, &mtk_dsi_component_ops);
···26262727 tidss_runtime_get(tidss);28282929- drm_atomic_helper_commit_modeset_disables(ddev, old_state);3030- drm_atomic_helper_commit_planes(ddev, old_state, DRM_PLANE_COMMIT_ACTIVE_ONLY);3131- drm_atomic_helper_commit_modeset_enables(ddev, old_state);2929+ /*3030+ * TI's OLDI and DSI encoders need to be set up before the crtc is3131+ * enabled. Thus drm_atomic_helper_commit_modeset_enables() and3232+ * drm_atomic_helper_commit_modeset_disables() cannot be used here, as3333+ * they enable the crtc before bridges' pre-enable, and disable the crtc3434+ * after bridges' post-disable.3535+ *3636+ * Open code the functions here and first call the bridges' pre-enables,3737+ * then crtc enable, then bridges' post-enable (and vice versa for3838+ * disable).3939+ */4040+4141+ drm_atomic_helper_commit_encoder_bridge_disable(ddev, old_state);4242+ drm_atomic_helper_commit_crtc_disable(ddev, old_state);4343+ drm_atomic_helper_commit_encoder_bridge_post_disable(ddev, old_state);4444+4545+ drm_atomic_helper_update_legacy_modeset_state(ddev, old_state);4646+ drm_atomic_helper_calc_timestamping_constants(old_state);4747+ drm_atomic_helper_commit_crtc_set_mode(ddev, old_state);4848+4949+ drm_atomic_helper_commit_planes(ddev, old_state,5050+ DRM_PLANE_COMMIT_ACTIVE_ONLY);5151+5252+ drm_atomic_helper_commit_encoder_bridge_pre_enable(ddev, old_state);5353+ drm_atomic_helper_commit_crtc_enable(ddev, old_state);5454+ drm_atomic_helper_commit_encoder_bridge_enable(ddev, old_state);5555+ drm_atomic_helper_commit_writebacks(ddev, old_state);32563357 drm_atomic_helper_commit_hw_done(old_state);3458 drm_atomic_helper_wait_for_flip_done(ddev, old_state);
+1-1
drivers/gpu/nova-core/Kconfig
···33 depends on 64BIT44 depends on PCI55 depends on RUST66- depends on RUST_FW_LOADER_ABSTRACTIONS66+ select RUST_FW_LOADER_ABSTRACTIONS77 select AUXILIARY_BUS88 default n99 help
+8-6
drivers/gpu/nova-core/gsp/cmdq.rs
···588588 header.length(),589589 );590590591591+ let payload_length = header.payload_length();592592+591593 // Check that the driver read area is large enough for the message.592592- if slice_1.len() + slice_2.len() < header.length() {594594+ if slice_1.len() + slice_2.len() < payload_length {593595 return Err(EIO);594596 }595597596598 // Cut the message slices down to the actual length of the message.597597- let (slice_1, slice_2) = if slice_1.len() > header.length() {598598- // PANIC: we checked above that `slice_1` is at least as long as `msg_header.length()`.599599- (slice_1.split_at(header.length()).0, &slice_2[0..0])599599+ let (slice_1, slice_2) = if slice_1.len() > payload_length {600600+ // PANIC: we checked above that `slice_1` is at least as long as `payload_length`.601601+ (slice_1.split_at(payload_length).0, &slice_2[0..0])600602 } else {601603 (602604 slice_1,603605 // PANIC: we checked above that `slice_1.len() + slice_2.len()` is at least as604604- // large as `msg_header.length()`.605605- slice_2.split_at(header.length() - slice_1.len()).0,606606+ // large as `payload_length`.607607+ slice_2.split_at(payload_length - slice_1.len()).0,606608 )607609 };608610
+38-40
drivers/gpu/nova-core/gsp/fw.rs
···
 // are valid.
 unsafe impl FromBytes for GspFwWprMeta {}

-type GspFwWprMetaBootResumeInfo = r570_144::GspFwWprMeta__bindgen_ty_1;
-type GspFwWprMetaBootInfo = r570_144::GspFwWprMeta__bindgen_ty_1__bindgen_ty_1;
+type GspFwWprMetaBootResumeInfo = bindings::GspFwWprMeta__bindgen_ty_1;
+type GspFwWprMetaBootInfo = bindings::GspFwWprMeta__bindgen_ty_1__bindgen_ty_1;

 impl GspFwWprMeta {
     /// Fill in and return a `GspFwWprMeta` suitable for booting `gsp_firmware` using the
···
     pub(crate) fn new(gsp_firmware: &GspFirmware, fb_layout: &FbLayout) -> Self {
         Self(bindings::GspFwWprMeta {
             // CAST: we want to store the bits of `GSP_FW_WPR_META_MAGIC` unmodified.
-            magic: r570_144::GSP_FW_WPR_META_MAGIC as u64,
-            revision: u64::from(r570_144::GSP_FW_WPR_META_REVISION),
+            magic: bindings::GSP_FW_WPR_META_MAGIC as u64,
+            revision: u64::from(bindings::GSP_FW_WPR_META_REVISION),
             sysmemAddrOfRadix3Elf: gsp_firmware.radix3_dma_handle(),
             sizeOfRadix3Elf: u64::from_safe_cast(gsp_firmware.size),
             sysmemAddrOfBootloader: gsp_firmware.bootloader.ucode.dma_handle(),
···
 #[repr(u32)]
 pub(crate) enum SeqBufOpcode {
     // Core operation opcodes
-    CoreReset = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESET,
-    CoreResume = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESUME,
-    CoreStart = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_START,
-    CoreWaitForHalt = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_WAIT_FOR_HALT,
+    CoreReset = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESET,
+    CoreResume = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESUME,
+    CoreStart = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_START,
+    CoreWaitForHalt = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_WAIT_FOR_HALT,

     // Delay opcode
-    DelayUs = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_DELAY_US,
+    DelayUs = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_DELAY_US,

     // Register operation opcodes
-    RegModify = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_MODIFY,
-    RegPoll = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_POLL,
-    RegStore = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_STORE,
-    RegWrite = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_WRITE,
+    RegModify = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_MODIFY,
+    RegPoll = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_POLL,
+    RegStore = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_STORE,
+    RegWrite = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_WRITE,
 }

 impl fmt::Display for SeqBufOpcode {
···
     fn try_from(value: u32) -> Result<SeqBufOpcode> {
         match value {
-            r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESET => {
+            bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESET => {
                 Ok(SeqBufOpcode::CoreReset)
             }
-            r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESUME => {
+            bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESUME => {
                 Ok(SeqBufOpcode::CoreResume)
             }
-            r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_START => {
+            bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_START => {
                 Ok(SeqBufOpcode::CoreStart)
             }
-            r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_WAIT_FOR_HALT => {
+            bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_WAIT_FOR_HALT => {
                 Ok(SeqBufOpcode::CoreWaitForHalt)
             }
-            r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_DELAY_US => Ok(SeqBufOpcode::DelayUs),
-            r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_MODIFY => {
+            bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_DELAY_US => Ok(SeqBufOpcode::DelayUs),
+            bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_MODIFY => {
                 Ok(SeqBufOpcode::RegModify)
             }
-            r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_POLL => Ok(SeqBufOpcode::RegPoll),
-            r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_STORE => Ok(SeqBufOpcode::RegStore),
-            r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_WRITE => Ok(SeqBufOpcode::RegWrite),
+            bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_POLL => Ok(SeqBufOpcode::RegPoll),
+            bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_STORE => Ok(SeqBufOpcode::RegStore),
+            bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_WRITE => Ok(SeqBufOpcode::RegWrite),
             _ => Err(EINVAL),
         }
     }
···
 /// Wrapper for GSP sequencer register write payload.
 #[repr(transparent)]
 #[derive(Copy, Clone)]
-pub(crate) struct RegWritePayload(r570_144::GSP_SEQ_BUF_PAYLOAD_REG_WRITE);
+pub(crate) struct RegWritePayload(bindings::GSP_SEQ_BUF_PAYLOAD_REG_WRITE);

 impl RegWritePayload {
     /// Returns the register address.
···
 /// Wrapper for GSP sequencer register modify payload.
 #[repr(transparent)]
 #[derive(Copy, Clone)]
-pub(crate) struct RegModifyPayload(r570_144::GSP_SEQ_BUF_PAYLOAD_REG_MODIFY);
+pub(crate) struct RegModifyPayload(bindings::GSP_SEQ_BUF_PAYLOAD_REG_MODIFY);

 impl RegModifyPayload {
     /// Returns the register address.
···
 /// Wrapper for GSP sequencer register poll payload.
 #[repr(transparent)]
 #[derive(Copy, Clone)]
-pub(crate) struct RegPollPayload(r570_144::GSP_SEQ_BUF_PAYLOAD_REG_POLL);
+pub(crate) struct RegPollPayload(bindings::GSP_SEQ_BUF_PAYLOAD_REG_POLL);

 impl RegPollPayload {
     /// Returns the register address.
···
 /// Wrapper for GSP sequencer delay payload.
 #[repr(transparent)]
 #[derive(Copy, Clone)]
-pub(crate) struct DelayUsPayload(r570_144::GSP_SEQ_BUF_PAYLOAD_DELAY_US);
+pub(crate) struct DelayUsPayload(bindings::GSP_SEQ_BUF_PAYLOAD_DELAY_US);

 impl DelayUsPayload {
     /// Returns the delay value in microseconds.
···
 /// Wrapper for GSP sequencer register store payload.
 #[repr(transparent)]
 #[derive(Copy, Clone)]
-pub(crate) struct RegStorePayload(r570_144::GSP_SEQ_BUF_PAYLOAD_REG_STORE);
+pub(crate) struct RegStorePayload(bindings::GSP_SEQ_BUF_PAYLOAD_REG_STORE);

 impl RegStorePayload {
     /// Returns the register address.
···
 /// Wrapper for GSP sequencer buffer command.
 #[repr(transparent)]
-pub(crate) struct SequencerBufferCmd(r570_144::GSP_SEQUENCER_BUFFER_CMD);
+pub(crate) struct SequencerBufferCmd(bindings::GSP_SEQUENCER_BUFFER_CMD);

 impl SequencerBufferCmd {
     /// Returns the opcode as a `SeqBufOpcode` enum, or error if invalid.
···
 /// Wrapper for GSP run CPU sequencer RPC.
 #[repr(transparent)]
-pub(crate) struct RunCpuSequencer(r570_144::rpc_run_cpu_sequencer_v17_00);
+pub(crate) struct RunCpuSequencer(bindings::rpc_run_cpu_sequencer_v17_00);

 impl RunCpuSequencer {
     /// Returns the command index.
···
     }
 }

-// SAFETY: We can't derive the Zeroable trait for this binding because the
-// procedural macro doesn't support the syntax used by bindgen to create the
-// __IncompleteArrayField types. So instead we implement it here, which is safe
-// because these are explicitly padded structures only containing types for
-// which any bit pattern, including all zeros, is valid.
-unsafe impl Zeroable for bindings::rpc_message_header_v {}
-
 /// GSP Message Element.
 ///
 /// This is essentially a message header expected to be followed by the message data.
···
         self.inner.checkSum = checksum;
     }

-    /// Returns the total length of the message.
+    /// Returns the length of the message's payload.
+    pub(crate) fn payload_length(&self) -> usize {
+        // `rpc.length` includes the length of the RPC message header.
+        num::u32_as_usize(self.inner.rpc.length)
+            .saturating_sub(size_of::<bindings::rpc_message_header_v>())
+    }
+
+    /// Returns the total length of the message, message and RPC headers included.
     pub(crate) fn length(&self) -> usize {
-        // `rpc.length` includes the length of the GspRpcHeader but not the message header.
-        size_of::<Self>() - size_of::<bindings::rpc_message_header_v>()
-            + num::u32_as_usize(self.inner.rpc.length)
+        size_of::<Self>() + self.payload_length()
     }

     // Returns the sequence number of the message.
+7-4
drivers/gpu/nova-core/gsp/fw/r570_144.rs
···
     unreachable_pub,
     unsafe_op_in_unsafe_fn
 )]
-use kernel::{
-    ffi,
-    prelude::Zeroable, //
-};
+use kernel::ffi;
+use pin_init::MaybeZeroable;
+
 include!("r570_144/bindings.rs");
+
+// SAFETY: This type has a size of zero, so its inclusion into another type should not affect their
+// ability to implement `Zeroable`.
+unsafe impl<T> kernel::prelude::Zeroable for __IncompleteArrayField<T> {}
···
 	 * In addition to report data device will supply data length
 	 * in the first 2 bytes of the response, so adjust .
 	 */
+	recv_len = min(recv_len, ihid->bufsize - sizeof(__le16));
 	error = i2c_hid_xfer(ihid, ihid->cmdbuf, length,
 			     ihid->rawbuf, recv_len + sizeof(__le16));
 	if (error) {
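The added `min()` bounds the requested receive length so that the transfer (payload plus the 2-byte length prefix) can never exceed the raw buffer. The clamp can be sketched in plain C; `bufsize` stands in for `ihid->bufsize` and the names are illustrative:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Cap the requested receive length so that recv_len + the 2-byte
 * little-endian length prefix always fits inside the raw buffer. */
static size_t clamp_recv_len(size_t recv_len, size_t bufsize)
{
	size_t max = bufsize - sizeof(uint16_t); /* room for the __le16 prefix */

	return recv_len < max ? recv_len : max;
}
```

Without the clamp, a device reporting an oversized maximum input length would make the subsequent transfer write past the end of `rawbuf`.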
···
 config INTEL_THC_HID
 	tristate "Intel Touch Host Controller"
 	depends on ACPI
+	select SGL_ALLOC
 	help
 	  THC (Touch Host Controller) is the name of the IP block in PCH that
 	  interfaces with Touch Devices (ex: touchscreen, touchpad etc.). It
···
 	if (!max_rx_size)
 		return -EOPNOTSUPP;

-	ret = regmap_read(dev->thc_regmap, THC_M_PRT_SW_SEQ_STS_OFFSET, &val);
+	ret = regmap_read(dev->thc_regmap, THC_M_PRT_SPI_ICRRD_OPCODE_OFFSET, &val);
 	if (ret)
 		return ret;
···
 	if (!delay_us)
 		return -EOPNOTSUPP;

-	ret = regmap_read(dev->thc_regmap, THC_M_PRT_SW_SEQ_STS_OFFSET, &val);
+	ret = regmap_read(dev->thc_regmap, THC_M_PRT_SPI_ICRRD_OPCODE_OFFSET, &val);
 	if (ret)
 		return ret;
···
  * @dir: Direction of DMA for this config
  * @prd_tbls: PRD tables for current DMA
  * @sgls: Array of pointers to scatter-gather lists
+ * @sgls_nent_pages: Number of pages per scatter-gather list
  * @sgls_nent: Actual number of entries per scatter-gather list
  * @prd_tbl_num: Actual number of PRD tables
  * @max_packet_size: Size of the buffer needed for 1 DMA message (1 PRD table)
···
 	struct thc_prd_table *prd_tbls;
 	struct scatterlist *sgls[PRD_TABLES_NUM];
+	u8 sgls_nent_pages[PRD_TABLES_NUM];
 	u8 sgls_nent[PRD_TABLES_NUM];
 	u8 prd_tbl_num;
+16-1
drivers/hid/usbhid/hid-core.c
···
 	struct usb_device *dev = interface_to_usbdev (intf);
 	struct hid_descriptor *hdesc;
 	struct hid_class_descriptor *hcdesc;
+	__u8 fixed_opt_descriptors_size;
 	u32 quirks = 0;
 	unsigned int rsize = 0;
 	char *rdesc;
···
 	    (hdesc->bNumDescriptors - 1) * sizeof(*hcdesc)) {
 		dbg_hid("hid descriptor invalid, bLen=%hhu bNum=%hhu\n",
 			hdesc->bLength, hdesc->bNumDescriptors);
-		return -EINVAL;
+
+		/*
+		 * Some devices may expose a wrong number of descriptors compared
+		 * to the provided length.
+		 * However, we ignore the optional hid class descriptors entirely
+		 * so we can safely recompute the proper field.
+		 */
+		if (hdesc->bLength >= sizeof(*hdesc)) {
+			fixed_opt_descriptors_size = hdesc->bLength - sizeof(*hdesc);
+
+			hid_warn(intf, "fixing wrong optional hid class descriptors count\n");
+			hdesc->bNumDescriptors = fixed_opt_descriptors_size / sizeof(*hcdesc) + 1;
+		} else {
+			return -EINVAL;
+		}
 	}

 	hid->version = le16_to_cpu(hdesc->bcdHID);
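Instead of rejecting a device whose `bNumDescriptors` disagrees with `bLength`, the patch recomputes the count from the length: everything past the fixed part of the descriptor is optional class-descriptor entries, plus the one mandatory embedded entry. The recomputation can be sketched with assumed sizes (a 9-byte HID descriptor containing one 3-byte class-descriptor entry; these are illustrative, not the kernel's struct definitions):

```c
#include <assert.h>

/* Illustrative sizes: 6 bytes of fixed HID descriptor fields plus one
 * embedded 3-byte class-descriptor entry. */
#define HID_DESC_SIZE	9
#define CLASS_DESC_SIZE	3

/* Recompute bNumDescriptors from bLength the way the patch does: bytes
 * past the fixed part are optional class descriptors, plus the mandatory
 * embedded one. Returns -1 when bLength is too small to be fixed up
 * (the -EINVAL path). */
static int fixed_num_descriptors(int blength)
{
	if (blength < HID_DESC_SIZE)
		return -1;
	return (blength - HID_DESC_SIZE) / CLASS_DESC_SIZE + 1;
}
```

This keeps devices with a merely inconsistent `bNumDescriptors` working, while descriptors too short to contain even the fixed fields are still rejected.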
···
 	depends on DEBUG_KERNEL
 	depends on FAULT_INJECTION
 	depends on RUNTIME_TESTING_MENU
-	depends on IOMMU_PT_AMDV1
+	depends on IOMMU_PT_AMDV1=y || IOMMUFD=IOMMU_PT_AMDV1
+	select DMA_SHARED_BUFFER
 	select IOMMUFD_DRIVER
 	default n
 	help
+1-1
drivers/irqchip/irq-gic-v5-its.c
···

 	itte = gicv5_its_device_get_itte_ref(its_dev, event_id);

-	if (FIELD_GET(GICV5_ITTL2E_VALID, *itte))
+	if (FIELD_GET(GICV5_ITTL2E_VALID, le64_to_cpu(*itte)))
 		return -EEXIST;

 	itt_entry = FIELD_PREP(GICV5_ITTL2E_LPI_ID, lpi) |
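The bug here is an endianness one: the ITT entry lives in memory in little-endian layout, so extracting a bitfield from the raw `__le64` only happens to work on little-endian hosts; the value must go through `le64_to_cpu()` first. A userspace sketch with stand-in helpers (the bit position and helper names are assumptions for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for le64_to_cpu(): assemble byte 0 as the least significant
 * byte, regardless of host endianness. */
static uint64_t my_le64_to_cpu(const uint8_t raw[8])
{
	uint64_t v = 0;
	int i;

	for (i = 7; i >= 0; i--)
		v = (v << 8) | raw[i];
	return v;
}

#define ITTL2E_VALID	(1ULL << 0)	/* assumed bit position for the sketch */

/* Convert to CPU byte order *before* testing the bitfield, as the fix does. */
static int itte_is_valid(const uint8_t raw[8])
{
	return (my_le64_to_cpu(raw) & ITTL2E_VALID) != 0;
}
```

On a big-endian host, masking the raw 8 bytes directly would test the wrong bit; converting first makes the check byte-order independent.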
+8-2
drivers/irqchip/irq-riscv-imsic-state.c
···
 		lpriv = per_cpu_ptr(imsic->lpriv, cpu);

 		bitmap_free(lpriv->dirty_bitmap);
+		kfree(lpriv->vectors);
 	}

 	free_percpu(imsic->lpriv);
···
 	int cpu, i;

 	/* Allocate per-CPU private state */
-	imsic->lpriv = __alloc_percpu(struct_size(imsic->lpriv, vectors, global->nr_ids + 1),
-				      __alignof__(*imsic->lpriv));
+	imsic->lpriv = alloc_percpu(typeof(*imsic->lpriv));
 	if (!imsic->lpriv)
 		return -ENOMEM;
···
 		/* Setup lazy timer for synchronization */
 		timer_setup(&lpriv->timer, imsic_local_timer_callback, TIMER_PINNED);
 #endif
+
+		/* Allocate vector array */
+		lpriv->vectors = kcalloc(global->nr_ids + 1, sizeof(*lpriv->vectors),
+					 GFP_KERNEL);
+		if (!lpriv->vectors)
+			goto fail_local_cleanup;

 		/* Setup vector array */
 		for (i = 0; i <= global->nr_ids; i++) {
···

 #define MEI_DEV_ID_WCL_P      0x4D70  /* Wildcat Lake P */

+#define MEI_DEV_ID_NVL_S      0x6E68  /* Nova Lake Point S */
+
 /*
  * MEI HW Section
  */
···
 config MISC_RP1
 	tristate "RaspberryPi RP1 misc device"
-	depends on OF_IRQ && OF_OVERLAY && PCI_MSI && PCI_QUIRKS
-	select PCI_DYNAMIC_OF_NODES
+	depends on OF_IRQ && PCI_MSI
 	help
 	  Support the RP1 peripheral chip found on Raspberry Pi 5 board.
···
 	  The driver is responsible for enabling the DT node once the PCIe
 	  endpoint has been configured, and handling interrupts.
-
-	  This driver uses an overlay to load other drivers to support for
-	  RP1 internal sub-devices.
···
-// SPDX-License-Identifier: (GPL-2.0 OR MIT)
-
-/*
- * The dts overlay is included from the dts directory so
- * it can be possible to check it with CHECK_DTBS while
- * also compile it from the driver source directory.
- */
-
-/dts-v1/;
-/plugin/;
-
-/ {
-	fragment@0 {
-		target-path="";
-		__overlay__ {
-			compatible = "pci1de4,1";
-			#address-cells = <3>;
-			#size-cells = <2>;
-			interrupt-controller;
-			#interrupt-cells = <2>;
-
-			#include "arm64/broadcom/rp1-common.dtsi"
-		};
-	};
-};
+4-33
drivers/misc/rp1/rp1_pci.c
···
 /* Interrupts */
 #define RP1_INT_END 61

-/* Embedded dtbo symbols created by cmd_wrap_S_dtb in scripts/Makefile.lib */
-extern char __dtbo_rp1_pci_begin[];
-extern char __dtbo_rp1_pci_end[];
-
 struct rp1_dev {
 	struct pci_dev *pdev;
 	struct irq_domain *domain;
 	struct irq_data *pcie_irqds[64];
 	void __iomem *bar1;
-	int ovcs_id;	/* overlay changeset id */
 	bool level_triggered_irq[RP1_INT_END];
 };
···
 static int rp1_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 {
-	u32 dtbo_size = __dtbo_rp1_pci_end - __dtbo_rp1_pci_begin;
-	void *dtbo_start = __dtbo_rp1_pci_begin;
 	struct device *dev = &pdev->dev;
 	struct device_node *rp1_node;
-	bool skip_ovl = true;
 	struct rp1_dev *rp1;
 	int err = 0;
 	int i;

-	/*
-	 * Either use rp1_nexus node if already present in DT, or
-	 * set a flag to load it from overlay at runtime
-	 */
-	rp1_node = of_find_node_by_name(NULL, "rp1_nexus");
-	if (!rp1_node) {
-		rp1_node = dev_of_node(dev);
-		skip_ovl = false;
-	}
+	rp1_node = dev_of_node(dev);

 	if (!rp1_node) {
 		dev_err(dev, "Missing of_node for device\n");
···
 				 rp1_chained_handle_irq, rp1);
 	}

-	if (!skip_ovl) {
-		err = of_overlay_fdt_apply(dtbo_start, dtbo_size, &rp1->ovcs_id,
-					   rp1_node);
-		if (err)
-			goto err_unregister_interrupts;
-	}
-
 	err = of_platform_default_populate(rp1_node, NULL, dev);
 	if (err) {
 		dev_err_probe(&pdev->dev, err, "Error populating devicetree\n");
-		goto err_unload_overlay;
+		goto err_unregister_interrupts;
 	}

-	if (skip_ovl)
-		of_node_put(rp1_node);
+	of_node_put(rp1_node);

 	return 0;

-err_unload_overlay:
-	of_overlay_remove(&rp1->ovcs_id);
 err_unregister_interrupts:
 	rp1_unregister_interrupts(pdev);
 err_put_node:
-	if (skip_ovl)
-		of_node_put(rp1_node);
+	of_node_put(rp1_node);

 	return err;
 }

 static void rp1_remove(struct pci_dev *pdev)
 {
-	struct rp1_dev *rp1 = pci_get_drvdata(pdev);
 	struct device *dev = &pdev->dev;

 	of_platform_depopulate(dev);
-	of_overlay_remove(&rp1->ovcs_id);
 	rp1_unregister_interrupts(pdev);
 }
+1-1
drivers/mtd/nand/ecc-sw-hamming.c
···
  *
  * Completely replaces the previous ECC implementation which was written by:
  *   Steven J. Hill (sjhill@realitydiluted.com)
- *   Thomas Gleixner (tglx@linutronix.de)
+ *   Thomas Gleixner (tglx@kernel.org)
  *
  * Information on how this algorithm works and how it was developed
  * can be found in Documentation/driver-api/mtd/nand_ecc.rst
+1-1
drivers/mtd/nand/raw/diskonchip.c
···
  * Error correction code lifted from the old docecc code
  * Author: Fabrice Bellard (fabrice.bellard@netgem.com)
  * Copyright (C) 2000 Netgem S.A.
- * converted to the generic Reed-Solomon library by Thomas Gleixner <tglx@linutronix.de>
+ * converted to the generic Reed-Solomon library by Thomas Gleixner <tglx@kernel.org>
  *
  * Interface to generic NAND code for M-Systems DiskOnChip devices
  */
+2-2
drivers/mtd/nand/raw/nand_base.c
···
  * http://www.linux-mtd.infradead.org/doc/nand.html
  *
  * Copyright (C) 2000 Steven J. Hill (sjhill@realitydiluted.com)
- *		  2002-2006 Thomas Gleixner (tglx@linutronix.de)
+ *		  2002-2006 Thomas Gleixner (tglx@kernel.org)
  *
  * Credits:
  *	David Woodhouse for adding multichip support
···
 MODULE_LICENSE("GPL");
 MODULE_AUTHOR("Steven J. Hill <sjhill@realitydiluted.com>");
-MODULE_AUTHOR("Thomas Gleixner <tglx@linutronix.de>");
+MODULE_AUTHOR("Thomas Gleixner <tglx@kernel.org>");
 MODULE_DESCRIPTION("Generic NAND flash driver code");
···
 // SPDX-License-Identifier: GPL-2.0-only
 /*
- * Copyright (C) 2002 Thomas Gleixner (tglx@linutronix.de)
+ * Copyright (C) 2002 Thomas Gleixner (tglx@kernel.org)
  */

 #include <linux/sizes.h>
+1-1
drivers/mtd/nand/raw/nand_jedec.c
···
 // SPDX-License-Identifier: GPL-2.0
 /*
  * Copyright (C) 2000 Steven J. Hill (sjhill@realitydiluted.com)
- *		  2002-2006 Thomas Gleixner (tglx@linutronix.de)
+ *		  2002-2006 Thomas Gleixner (tglx@kernel.org)
  *
  * Credits:
  *	David Woodhouse for adding multichip support
+1-1
drivers/mtd/nand/raw/nand_legacy.c
···
 // SPDX-License-Identifier: GPL-2.0
 /*
  * Copyright (C) 2000 Steven J. Hill (sjhill@realitydiluted.com)
- *		  2002-2006 Thomas Gleixner (tglx@linutronix.de)
+ *		  2002-2006 Thomas Gleixner (tglx@kernel.org)
  *
  * Credits:
  *	David Woodhouse for adding multichip support
+1-1
drivers/mtd/nand/raw/nand_onfi.c
···
 // SPDX-License-Identifier: GPL-2.0
 /*
  * Copyright (C) 2000 Steven J. Hill (sjhill@realitydiluted.com)
- *		  2002-2006 Thomas Gleixner (tglx@linutronix.de)
+ *		  2002-2006 Thomas Gleixner (tglx@kernel.org)
  *
  * Credits:
  *	David Woodhouse for adding multichip support
+1-1
drivers/mtd/nand/raw/ndfc.c
···
 module_platform_driver(ndfc_driver);

 MODULE_LICENSE("GPL");
-MODULE_AUTHOR("Thomas Gleixner <tglx@linutronix.de>");
+MODULE_AUTHOR("Thomas Gleixner <tglx@kernel.org>");
 MODULE_DESCRIPTION("OF Platform driver for NDFC");
-23
drivers/net/dsa/mv88e6xxx/chip.c
···

 static int mv88e6xxx_setup_port(struct mv88e6xxx_chip *chip, int port)
 {
-	struct device_node *phy_handle = NULL;
 	struct fwnode_handle *ports_fwnode;
 	struct fwnode_handle *port_fwnode;
 	struct dsa_switch *ds = chip->ds;
 	struct mv88e6xxx_port *p;
-	struct dsa_port *dp;
-	int tx_amp;
 	int err;
 	u16 reg;
 	u32 val;
···
 		err = chip->info->ops->port_setup_message_port(chip, port);
 		if (err)
 			return err;
 	}
-
-	if (chip->info->ops->serdes_set_tx_amplitude) {
-		dp = dsa_to_port(ds, port);
-		if (dp)
-			phy_handle = of_parse_phandle(dp->dn, "phy-handle", 0);
-
-		if (phy_handle && !of_property_read_u32(phy_handle,
-							"tx-p2p-microvolt",
-							&tx_amp))
-			err = chip->info->ops->serdes_set_tx_amplitude(chip,
-								       port, tx_amp);
-		if (phy_handle) {
-			of_node_put(phy_handle);
-			if (err)
-				return err;
-		}
-	}

 	/* Port based VLAN map: give each port the same default address
···
 	.serdes_irq_mapping = mv88e6352_serdes_irq_mapping,
 	.serdes_get_regs_len = mv88e6352_serdes_get_regs_len,
 	.serdes_get_regs = mv88e6352_serdes_get_regs,
-	.serdes_set_tx_amplitude = mv88e6352_serdes_set_tx_amplitude,
 	.gpio_ops = &mv88e6352_gpio_ops,
 	.phylink_get_caps = mv88e6352_phylink_get_caps,
 	.pcs_ops = &mv88e6352_pcs_ops,
···
 	.serdes_irq_mapping = mv88e6352_serdes_irq_mapping,
 	.serdes_get_regs_len = mv88e6352_serdes_get_regs_len,
 	.serdes_get_regs = mv88e6352_serdes_get_regs,
-	.serdes_set_tx_amplitude = mv88e6352_serdes_set_tx_amplitude,
 	.gpio_ops = &mv88e6352_gpio_ops,
 	.avb_ops = &mv88e6352_avb_ops,
 	.ptp_ops = &mv88e6352_ptp_ops,
···
 	.serdes_get_stats = mv88e6352_serdes_get_stats,
 	.serdes_get_regs_len = mv88e6352_serdes_get_regs_len,
 	.serdes_get_regs = mv88e6352_serdes_get_regs,
-	.serdes_set_tx_amplitude = mv88e6352_serdes_set_tx_amplitude,
 	.phylink_get_caps = mv88e6352_phylink_get_caps,
 	.pcs_ops = &mv88e6352_pcs_ops,
 };
-4
drivers/net/dsa/mv88e6xxx/chip.h
···
 	void (*serdes_get_regs)(struct mv88e6xxx_chip *chip, int port,
 				void *_p);

-	/* SERDES SGMII/Fiber Output Amplitude */
-	int (*serdes_set_tx_amplitude)(struct mv88e6xxx_chip *chip, int port,
-				       int val);
-
 	/* Address Translation Unit operations */
 	int (*atu_get_hash)(struct mv88e6xxx_chip *chip, u8 *hash);
 	int (*atu_set_hash)(struct mv88e6xxx_chip *chip, u8 hash);
-46
drivers/net/dsa/mv88e6xxx/serdes.c
···
 					reg, val);
 }

-static int mv88e6352_serdes_write(struct mv88e6xxx_chip *chip, int reg,
-				  u16 val)
-{
-	return mv88e6xxx_phy_page_write(chip, MV88E6352_ADDR_SERDES,
-					MV88E6352_SERDES_PAGE_FIBER,
-					reg, val);
-}
-
 static int mv88e6390_serdes_read(struct mv88e6xxx_chip *chip,
 				 int lane, int device, int reg, u16 *val)
···
 		if (!err)
 			p[i] = reg;
 	}
-}
-
-static const int mv88e6352_serdes_p2p_to_reg[] = {
-	/* Index of value in microvolts corresponds to the register value */
-	14000, 112000, 210000, 308000, 406000, 504000, 602000, 700000,
-};
-
-int mv88e6352_serdes_set_tx_amplitude(struct mv88e6xxx_chip *chip, int port,
-				      int val)
-{
-	bool found = false;
-	u16 ctrl, reg;
-	int err;
-	int i;
-
-	err = mv88e6352_g2_scratch_port_has_serdes(chip, port);
-	if (err <= 0)
-		return err;
-
-	for (i = 0; i < ARRAY_SIZE(mv88e6352_serdes_p2p_to_reg); ++i) {
-		if (mv88e6352_serdes_p2p_to_reg[i] == val) {
-			reg = i;
-			found = true;
-			break;
-		}
-	}
-
-	if (!found)
-		return -EINVAL;
-
-	err = mv88e6352_serdes_read(chip, MV88E6352_SERDES_SPEC_CTRL2, &ctrl);
-	if (err)
-		return err;
-
-	ctrl &= ~MV88E6352_SERDES_OUT_AMP_MASK;
-	ctrl |= reg;
-
-	return mv88e6352_serdes_write(chip, MV88E6352_SERDES_SPEC_CTRL2, ctrl);
 }
-5
drivers/net/dsa/mv88e6xxx/serdes.h
···
 #define MV88E6352_SERDES_INT_FIBRE_ENERGY	BIT(4)
 #define MV88E6352_SERDES_INT_STATUS	0x13

-#define MV88E6352_SERDES_SPEC_CTRL2	0x1a
-#define MV88E6352_SERDES_OUT_AMP_MASK	0x0007

 #define MV88E6341_PORT5_LANE		0x15
···
 void mv88e6352_serdes_get_regs(struct mv88e6xxx_chip *chip, int port, void *_p);
 int mv88e6390_serdes_get_regs_len(struct mv88e6xxx_chip *chip, int port);
 void mv88e6390_serdes_get_regs(struct mv88e6xxx_chip *chip, int port, void *_p);
-
-int mv88e6352_serdes_set_tx_amplitude(struct mv88e6xxx_chip *chip, int port,
-				      int val);

 /* Return the (first) SERDES lane address a port is using, -errno otherwise. */
 static inline int mv88e6xxx_serdes_get_lane(struct mv88e6xxx_chip *chip,
···
 	depends on PCI
 	select NET_DEVLINK
 	select PAGE_POOL
+	select AUXILIARY_BUS
 	help
 	  This driver supports Broadcom ThorUltra 50/100/200/400/800 gigabit
 	  Ethernet cards.  The module will be called bng_en.  To compile this
···

 struct idpf_fsteer_fltr {
 	struct list_head list;
-	u32 loc;
-	u32 q_index;
+	struct ethtool_rx_flow_spec fs;
 };

···
  * @rss_key: RSS hash key
  * @rss_lut_size: Size of RSS lookup table
  * @rss_lut: RSS lookup table
- * @cached_lut: Used to restore previously init RSS lut
  */
 struct idpf_rss_data {
 	u16 rss_key_size;
 	u8 *rss_key;
 	u16 rss_lut_size;
 	u32 *rss_lut;
-	u32 *cached_lut;
 };

···
  * @max_q: Maximum possible queues
  * @req_qs_chunks: Queue chunk data for requested queues
  * @mac_filter_list_lock: Lock to protect mac filters
+ * @flow_steer_list_lock: Lock to protect fsteer filters
  * @flags: See enum idpf_vport_config_flags
  */
 struct idpf_vport_config {
···
 	struct idpf_vport_max_q max_q;
 	struct virtchnl2_add_queues *req_qs_chunks;
 	spinlock_t mac_filter_list_lock;
+	spinlock_t flow_steer_list_lock;
 	DECLARE_BITMAP(flags, IDPF_VPORT_CONFIG_FLAGS_NBITS);
 };
+63-29
drivers/net/ethernet/intel/idpf/idpf_ethtool.c
···
 {
 	struct idpf_netdev_priv *np = netdev_priv(netdev);
 	struct idpf_vport_user_config_data *user_config;
+	struct idpf_vport_config *vport_config;
 	struct idpf_fsteer_fltr *f;
 	struct idpf_vport *vport;
 	unsigned int cnt = 0;
···
 	idpf_vport_ctrl_lock(netdev);
 	vport = idpf_netdev_to_vport(netdev);
-	user_config = &np->adapter->vport_config[np->vport_idx]->user_config;
+	vport_config = np->adapter->vport_config[np->vport_idx];
+	user_config = &vport_config->user_config;

 	switch (cmd->cmd) {
 	case ETHTOOL_GRXCLSRLCNT:
···
 		cmd->data = idpf_fsteer_max_rules(vport);
 		break;
 	case ETHTOOL_GRXCLSRULE:
-		err = -EINVAL;
+		err = -ENOENT;
+		spin_lock_bh(&vport_config->flow_steer_list_lock);
 		list_for_each_entry(f, &user_config->flow_steer_list, list)
-			if (f->loc == cmd->fs.location) {
-				cmd->fs.ring_cookie = f->q_index;
+			if (f->fs.location == cmd->fs.location) {
+				/* Avoid infoleak from padding: zero first,
+				 * then assign fields
+				 */
+				memset(&cmd->fs, 0, sizeof(cmd->fs));
+				cmd->fs = f->fs;
 				err = 0;
 				break;
 			}
+		spin_unlock_bh(&vport_config->flow_steer_list_lock);
 		break;
 	case ETHTOOL_GRXCLSRLALL:
 		cmd->data = idpf_fsteer_max_rules(vport);
+		spin_lock_bh(&vport_config->flow_steer_list_lock);
 		list_for_each_entry(f, &user_config->flow_steer_list, list) {
 			if (cnt == cmd->rule_cnt) {
 				err = -EMSGSIZE;
 				break;
 			}
-			rule_locs[cnt] = f->loc;
+			rule_locs[cnt] = f->fs.location;
 			cnt++;
 		}
 		if (!err)
 			cmd->rule_cnt = user_config->num_fsteer_fltrs;
+		spin_unlock_bh(&vport_config->flow_steer_list_lock);
 		break;
 	default:
 		break;
···
 	struct idpf_vport *vport;
 	u32 flow_type, q_index;
 	u16 num_rxq;
-	int err;
+	int err = 0;

 	vport = idpf_netdev_to_vport(netdev);
 	vport_config = vport->adapter->vport_config[np->vport_idx];
···
 	rule = kzalloc(struct_size(rule, rule_info, 1), GFP_KERNEL);
 	if (!rule)
 		return -ENOMEM;
+
+	fltr = kzalloc(sizeof(*fltr), GFP_KERNEL);
+	if (!fltr) {
+		err = -ENOMEM;
+		goto out_free_rule;
+	}
+
+	/* detect duplicate entry and reject before adding rules */
+	spin_lock_bh(&vport_config->flow_steer_list_lock);
+	list_for_each_entry(f, &user_config->flow_steer_list, list) {
+		if (f->fs.location == fsp->location) {
+			err = -EEXIST;
+			break;
+		}
+
+		if (f->fs.location > fsp->location)
+			break;
+		parent = f;
+	}
+	spin_unlock_bh(&vport_config->flow_steer_list_lock);
+
+	if (err)
+		goto out;

 	rule->vport_id = cpu_to_le32(vport->vport_id);
 	rule->count = cpu_to_le32(1);
···
 		goto out;
 	}

-	fltr = kzalloc(sizeof(*fltr), GFP_KERNEL);
-	if (!fltr) {
-		err = -ENOMEM;
-		goto out;
-	}
+	/* Save a copy of the user's flow spec so ethtool can later retrieve it */
+	fltr->fs = *fsp;

-	fltr->loc = fsp->location;
-	fltr->q_index = q_index;
-	list_for_each_entry(f, &user_config->flow_steer_list, list) {
-		if (f->loc >= fltr->loc)
-			break;
-		parent = f;
-	}
-
+	spin_lock_bh(&vport_config->flow_steer_list_lock);
 	parent ? list_add(&fltr->list, &parent->list) :
 		 list_add(&fltr->list, &user_config->flow_steer_list);

 	user_config->num_fsteer_fltrs++;
+	spin_unlock_bh(&vport_config->flow_steer_list_lock);
+	goto out_free_rule;

 out:
+	kfree(fltr);
+out_free_rule:
 	kfree(rule);
 	return err;
 }
···
 		goto out;
 	}

+	spin_lock_bh(&vport_config->flow_steer_list_lock);
 	list_for_each_entry_safe(f, iter,
 				 &user_config->flow_steer_list, list) {
-		if (f->loc == fsp->location) {
+		if (f->fs.location == fsp->location) {
 			list_del(&f->list);
 			kfree(f);
 			user_config->num_fsteer_fltrs--;
-			goto out;
+			goto out_unlock;
 		}
 	}
-	err = -EINVAL;
+	err = -ENOENT;

+out_unlock:
+	spin_unlock_bh(&vport_config->flow_steer_list_lock);
 out:
 	kfree(rule);
 	return err;
···
  * @netdev: network interface device structure
  * @rxfh: pointer to param struct (indir, key, hfunc)
  *
- * Reads the indirection table directly from the hardware. Always returns 0.
+ * RSS LUT and Key information are read from driver's cached
+ * copy. When rxhash is off, rss lut will be displayed as zeros.
+ *
+ * Return: 0 on success, -errno otherwise.
  */
 static int idpf_get_rxfh(struct net_device *netdev,
 			 struct ethtool_rxfh_param *rxfh)
···
 	struct idpf_netdev_priv *np = netdev_priv(netdev);
 	struct idpf_rss_data *rss_data;
 	struct idpf_adapter *adapter;
+	struct idpf_vport *vport;
+	bool rxhash_ena;
 	int err = 0;
 	u16 i;

 	idpf_vport_ctrl_lock(netdev);
+	vport = idpf_netdev_to_vport(netdev);

 	adapter = np->adapter;
···
 	}

 	rss_data = &adapter->vport_config[np->vport_idx]->user_config.rss_data;
-	if (!test_bit(IDPF_VPORT_UP, np->state))
-		goto unlock_mutex;

+	rxhash_ena = idpf_is_feature_ena(vport, NETIF_F_RXHASH);
 	rxfh->hfunc = ETH_RSS_HASH_TOP;

 	if (rxfh->key)
···
 	if (rxfh->indir) {
 		for (i = 0; i < rss_data->rss_lut_size; i++)
-			rxfh->indir[i] = rss_data->rss_lut[i];
+			rxfh->indir[i] = rxhash_ena ? rss_data->rss_lut[i] : 0;
 	}

 unlock_mutex:
···
 	}

 	rss_data = &adapter->vport_config[vport->idx]->user_config.rss_data;
-	if (!test_bit(IDPF_VPORT_UP, np->state))
-		goto unlock_mutex;

 	if (rxfh->hfunc != ETH_RSS_HASH_NO_CHANGE &&
 	    rxfh->hfunc != ETH_RSS_HASH_TOP) {
···
 		rss_data->rss_lut[lut] = rxfh->indir[lut];
 	}

-	err = idpf_config_rss(vport);
+	if (test_bit(IDPF_VPORT_UP, np->state))
+		err = idpf_config_rss(vport);

 unlock_mutex:
 	idpf_vport_ctrl_unlock(netdev);
+1-1
drivers/net/ethernet/intel/idpf/idpf_idc.c
···
 	for (i = 0; i < adapter->num_alloc_vports; i++) {
 		struct idpf_vport *vport = adapter->vports[i];

-		if (!vport)
+		if (!vport || !vport->vdev_info)
 			continue;

 		idpf_unplug_aux_dev(vport->vdev_info->adev);
+154-120
drivers/net/ethernet/intel/idpf/idpf_lib.c
···
 }

 /**
+ * idpf_del_all_flow_steer_filters - Delete all flow steer filters in list
+ * @vport: main vport struct
+ *
+ * Takes flow_steer_list_lock spinlock. Deletes all filters
+ */
+static void idpf_del_all_flow_steer_filters(struct idpf_vport *vport)
+{
+	struct idpf_vport_config *vport_config;
+	struct idpf_fsteer_fltr *f, *ftmp;
+
+	vport_config = vport->adapter->vport_config[vport->idx];
+
+	spin_lock_bh(&vport_config->flow_steer_list_lock);
+	list_for_each_entry_safe(f, ftmp, &vport_config->user_config.flow_steer_list,
+				 list) {
+		list_del(&f->list);
+		kfree(f);
+	}
+	vport_config->user_config.num_fsteer_fltrs = 0;
+	spin_unlock_bh(&vport_config->flow_steer_list_lock);
+}
+
+/**
  * idpf_find_mac_filter - Search filter list for specific mac filter
  * @vconfig: Vport config structure
  * @macaddr: The MAC address
···
 	return 0;
 }

+static void idpf_detach_and_close(struct idpf_adapter *adapter)
+{
+	int max_vports = adapter->max_vports;
+
+	for (int i = 0; i < max_vports; i++) {
+		struct net_device *netdev = adapter->netdevs[i];
+
+		/* If the interface is in detached state, that means the
+		 * previous reset was not handled successfully for this
+		 * vport.
+		 */
+		if (!netif_device_present(netdev))
+			continue;
+
+		/* Hold RTNL to protect racing with callbacks */
+		rtnl_lock();
+		netif_device_detach(netdev);
+		if (netif_running(netdev)) {
+			set_bit(IDPF_VPORT_UP_REQUESTED,
+				adapter->vport_config[i]->flags);
+			dev_close(netdev);
+		}
+		rtnl_unlock();
+	}
+}
+
+static void idpf_attach_and_open(struct idpf_adapter *adapter)
+{
+	int max_vports = adapter->max_vports;
+
+	for (int i = 0; i < max_vports; i++) {
+		struct idpf_vport *vport = adapter->vports[i];
+		struct idpf_vport_config *vport_config;
+		struct net_device *netdev;
+
+		/* In case of a critical error in the init task, the vport
+		 * will be freed. Only continue to restore the netdevs
+		 * if the vport is allocated.
+		 */
+		if (!vport)
+			continue;
+
+		/* No need for RTNL on attach as this function is called
+		 * following detach and dev_close(). We do take RTNL for
+		 * dev_open() below as it can race with external callbacks
+		 * following the call to netif_device_attach().
+		 */
+		netdev = adapter->netdevs[i];
+		netif_device_attach(netdev);
+		vport_config = adapter->vport_config[vport->idx];
+		if (test_and_clear_bit(IDPF_VPORT_UP_REQUESTED,
+				       vport_config->flags)) {
+			rtnl_lock();
+			dev_open(netdev, NULL);
+			rtnl_unlock();
+		}
+	}
+}
+
 /**
  * idpf_cfg_netdev - Allocate, configure and register a netdev
  * @vport: main vport structure
···
 	u16 idx = vport->idx;

 	vport_config = adapter->vport_config[vport->idx];
-	idpf_deinit_rss(vport);
+	idpf_deinit_rss_lut(vport);
 	rss_data = &vport_config->user_config.rss_data;
 	kfree(rss_data->rss_key);
 	rss_data->rss_key = NULL;
···
 		kfree(adapter->vport_config[idx]->req_qs_chunks);
 		adapter->vport_config[idx]->req_qs_chunks = NULL;
 	}
+	kfree(vport->rx_ptype_lkup);
+	vport->rx_ptype_lkup = NULL;
 	kfree(vport);
 	adapter->num_alloc_vports--;
 }
···
 	idpf_idc_deinit_vport_aux_device(vport->vdev_info);

 	idpf_deinit_mac_addr(vport);
-	idpf_vport_stop(vport, true);

-	if (!test_bit(IDPF_HR_RESET_IN_PROG, adapter->flags))
+	if (!test_bit(IDPF_HR_RESET_IN_PROG, adapter->flags)) {
+		idpf_vport_stop(vport, true);
 		idpf_decfg_netdev(vport);
-	if (test_bit(IDPF_REMOVE_IN_PROG, adapter->flags))
+
}10491049+ if (test_bit(IDPF_REMOVE_IN_PROG, adapter->flags)) {11331050 idpf_del_all_mac_filters(vport);10511051+ idpf_del_all_flow_steer_filters(vport);10521052+ }1134105311351054 if (adapter->netdevs[i]) {11361055 struct idpf_netdev_priv *np = netdev_priv(adapter->netdevs[i]);···12261139 u16 idx = adapter->next_vport;12271140 struct idpf_vport *vport;12281141 u16 num_max_q;11421142+ int err;1229114312301144 if (idx == IDPF_NO_FREE_SLOT)12311145 return NULL;···1277118912781190 idpf_vport_init(vport, max_q);1279119112801280- /* This alloc is done separate from the LUT because it's not strictly12811281- * dependent on how many queues we have. If we change number of queues12821282- * and soft reset we'll need a new LUT but the key can remain the same12831283- * for as long as the vport exists.11921192+ /* LUT and key are both initialized here. Key is not strictly dependent11931193+ * on how many queues we have. If we change number of queues and soft11941194+ * reset is initiated, LUT will be freed and a new LUT will be allocated11951195+ * as per the updated number of queues during vport bringup. 
However,11961196+ * the key remains the same for as long as the vport exists.12841197 */12851198 rss_data = &adapter->vport_config[idx]->user_config.rss_data;12861199 rss_data->rss_key = kzalloc(rss_data->rss_key_size, GFP_KERNEL);···1290120112911202 /* Initialize default rss key */12921203 netdev_rss_key_fill((void *)rss_data->rss_key, rss_data->rss_key_size);12041204+12051205+ /* Initialize default rss LUT */12061206+ err = idpf_init_rss_lut(vport);12071207+ if (err)12081208+ goto free_rss_key;1293120912941210 /* fill vport slot in the adapter struct */12951211 adapter->vports[idx] = vport;···1306121213071213 return vport;1308121412151215+free_rss_key:12161216+ kfree(rss_data->rss_key);13091217free_vector_idxs:13101218 kfree(vport->q_vector_idxs);13111219free_vport:···14841388{14851389 struct idpf_netdev_priv *np = netdev_priv(vport->netdev);14861390 struct idpf_adapter *adapter = vport->adapter;14871487- struct idpf_vport_config *vport_config;14881391 int err;1489139214901393 if (test_bit(IDPF_VPORT_UP, np->state))···15241429 if (err) {15251430 dev_err(&adapter->pdev->dev, "Failed to initialize queue registers for vport %u: %d\n",15261431 vport->vport_id, err);15271527- goto queues_rel;14321432+ goto intr_deinit;15281433 }1529143415301435 err = idpf_rx_bufs_init_all(vport);15311436 if (err) {15321437 dev_err(&adapter->pdev->dev, "Failed to initialize RX buffers for vport %u: %d\n",15331438 vport->vport_id, err);15341534- goto queues_rel;14391439+ goto intr_deinit;15351440 }1536144115371442 idpf_rx_init_buf_tail(vport);···1577148215781483 idpf_restore_features(vport);1579148415801580- vport_config = adapter->vport_config[vport->idx];15811581- if (vport_config->user_config.rss_data.rss_lut)15821582- err = idpf_config_rss(vport);15831583- else15841584- err = idpf_init_rss(vport);14851485+ err = idpf_config_rss(vport);15851486 if (err) {15861586- dev_err(&adapter->pdev->dev, "Failed to initialize RSS for vport %u: %d\n",14871487+ dev_err(&adapter->pdev->dev, "Failed 
to configure RSS for vport %u: %d\n",15871488 vport->vport_id, err);15881489 goto disable_vport;15891490 }···15881497 if (err) {15891498 dev_err(&adapter->pdev->dev, "Failed to complete interface up for vport %u: %d\n",15901499 vport->vport_id, err);15911591- goto deinit_rss;15001500+ goto disable_vport;15921501 }1593150215941503 if (rtnl)···1596150515971506 return 0;1598150715991599-deinit_rss:16001600- idpf_deinit_rss(vport);16011508disable_vport:16021509 idpf_send_disable_vport_msg(vport);16031510disable_queues:···16331544 struct idpf_vport_config *vport_config;16341545 struct idpf_vport_max_q max_q;16351546 struct idpf_adapter *adapter;16361636- struct idpf_netdev_priv *np;16371547 struct idpf_vport *vport;16381548 u16 num_default_vports;16391549 struct pci_dev *pdev;···16671579 goto unwind_vports;16681580 }1669158115821582+ err = idpf_send_get_rx_ptype_msg(vport);15831583+ if (err)15841584+ goto unwind_vports;15851585+16701586 index = vport->idx;16711587 vport_config = adapter->vport_config[index];1672158816731589 spin_lock_init(&vport_config->mac_filter_list_lock);15901590+ spin_lock_init(&vport_config->flow_steer_list_lock);1674159116751592 INIT_LIST_HEAD(&vport_config->user_config.mac_filter_list);16761593 INIT_LIST_HEAD(&vport_config->user_config.flow_steer_list);···16831590 err = idpf_check_supported_desc_ids(vport);16841591 if (err) {16851592 dev_err(&pdev->dev, "failed to get required descriptor ids\n");16861686- goto cfg_netdev_err;15931593+ goto unwind_vports;16871594 }1688159516891596 if (idpf_cfg_netdev(vport))16901690- goto cfg_netdev_err;16911691-16921692- err = idpf_send_get_rx_ptype_msg(vport);16931693- if (err)16941694- goto handle_err;16951695-16961696- /* Once state is put into DOWN, driver is ready for dev_open */16971697- np = netdev_priv(vport->netdev);16981698- clear_bit(IDPF_VPORT_UP, np->state);16991699- if (test_and_clear_bit(IDPF_VPORT_UP_REQUESTED, vport_config->flags))17001700- idpf_vport_open(vport, true);15971597+ goto 
unwind_vports;1701159817021599 /* Spawn and return 'idpf_init_task' work queue until all the17031600 * default vports are created···17181635 set_bit(IDPF_VPORT_REG_NETDEV, vport_config->flags);17191636 }1720163717211721- /* As all the required vports are created, clear the reset flag17221722- * unconditionally here in case we were in reset and the link was down.17231723- */16381638+ /* Clear the reset and load bits as all vports are created */17241639 clear_bit(IDPF_HR_RESET_IN_PROG, adapter->flags);16401640+ clear_bit(IDPF_HR_DRV_LOAD, adapter->flags);17251641 /* Start the statistics task now */17261642 queue_delayed_work(adapter->stats_wq, &adapter->stats_task,17271643 msecs_to_jiffies(10 * (pdev->devfn & 0x07)));1728164417291645 return;1730164617311731-handle_err:17321732- idpf_decfg_netdev(vport);17331733-cfg_netdev_err:17341734- idpf_vport_rel(vport);17351735- adapter->vports[index] = NULL;17361647unwind_vports:17371648 if (default_vport) {17381649 for (index = 0; index < adapter->max_vports; index++) {···17341657 idpf_vport_dealloc(adapter->vports[index]);17351658 }17361659 }16601660+ /* Cleanup after vc_core_init, which has no way of knowing the16611661+ * init task failed on driver load.16621662+ */16631663+ if (test_and_clear_bit(IDPF_HR_DRV_LOAD, adapter->flags)) {16641664+ cancel_delayed_work_sync(&adapter->serv_task);16651665+ cancel_delayed_work_sync(&adapter->mbx_task);16661666+ }16671667+ idpf_ptp_release(adapter);16681668+17371669 clear_bit(IDPF_HR_RESET_IN_PROG, adapter->flags);17381670}17391671···18731787}1874178818751789/**18761876- * idpf_set_vport_state - Set the vport state to be after the reset18771877- * @adapter: Driver specific private structure18781878- */18791879-static void idpf_set_vport_state(struct idpf_adapter *adapter)18801880-{18811881- u16 i;18821882-18831883- for (i = 0; i < adapter->max_vports; i++) {18841884- struct idpf_netdev_priv *np;18851885-18861886- if (!adapter->netdevs[i])18871887- continue;18881888-18891889- np = 
netdev_priv(adapter->netdevs[i]);18901890- if (test_bit(IDPF_VPORT_UP, np->state))18911891- set_bit(IDPF_VPORT_UP_REQUESTED,18921892- adapter->vport_config[i]->flags);18931893- }18941894-}18951895-18961896-/**18971790 * idpf_init_hard_reset - Initiate a hardware reset18981791 * @adapter: Driver specific private structure18991792 *···18801815 * reallocate. Also reinitialize the mailbox. Return 0 on success,18811816 * negative on failure.18821817 */18831883-static int idpf_init_hard_reset(struct idpf_adapter *adapter)18181818+static void idpf_init_hard_reset(struct idpf_adapter *adapter)18841819{18851820 struct idpf_reg_ops *reg_ops = &adapter->dev_ops.reg_ops;18861821 struct device *dev = &adapter->pdev->dev;18871887- struct net_device *netdev;18881822 int err;18891889- u16 i;1890182318241824+ idpf_detach_and_close(adapter);18911825 mutex_lock(&adapter->vport_ctrl_lock);1892182618931827 dev_info(dev, "Device HW Reset initiated\n");1894182818951895- /* Avoid TX hangs on reset */18961896- for (i = 0; i < adapter->max_vports; i++) {18971897- netdev = adapter->netdevs[i];18981898- if (!netdev)18991899- continue;19001900-19011901- netif_carrier_off(netdev);19021902- netif_tx_disable(netdev);19031903- }19041904-19051829 /* Prepare for reset */19061906- if (test_and_clear_bit(IDPF_HR_DRV_LOAD, adapter->flags)) {18301830+ if (test_bit(IDPF_HR_DRV_LOAD, adapter->flags)) {19071831 reg_ops->trigger_reset(adapter, IDPF_HR_DRV_LOAD);19081832 } else if (test_and_clear_bit(IDPF_HR_FUNC_RESET, adapter->flags)) {19091833 bool is_reset = idpf_is_reset_detected(adapter);1910183419111835 idpf_idc_issue_reset_event(adapter->cdev_info);1912183619131913- idpf_set_vport_state(adapter);19141837 idpf_vc_core_deinit(adapter);19151838 if (!is_reset)19161839 reg_ops->trigger_reset(adapter, IDPF_HR_FUNC_RESET);···19451892unlock_mutex:19461893 mutex_unlock(&adapter->vport_ctrl_lock);1947189419481948- /* Wait until all vports are created to init RDMA CORE AUX */19491949- if (!err)19501950- err = 
idpf_idc_init(adapter);19511951-19521952- return err;18951895+ /* Attempt to restore netdevs and initialize RDMA CORE AUX device,18961896+ * provided vc_core_init succeeded. It is still possible that18971897+ * vports are not allocated at this point if the init task failed.18981898+ */18991899+ if (!err) {19001900+ idpf_attach_and_open(adapter);19011901+ idpf_idc_init(adapter);19021902+ }19531903}1954190419551905/**···20531997 idpf_vport_stop(vport, false);20541998 }2055199920562056- idpf_deinit_rss(vport);20572000 /* We're passing in vport here because we need its wait_queue20582001 * to send a message and it should be getting all the vport20592002 * config data out of the adapter but we need to be careful not···20772022 err = idpf_set_real_num_queues(vport);20782023 if (err)20792024 goto err_open;20252025+20262026+ if (reset_cause == IDPF_SR_Q_CHANGE &&20272027+ !netif_is_rxfh_configured(vport->netdev))20282028+ idpf_fill_dflt_rss_lut(vport);2080202920812030 if (vport_is_up)20822031 err = idpf_vport_open(vport, false);···22252166}2226216722272168/**22282228- * idpf_vport_manage_rss_lut - disable/enable RSS22292229- * @vport: the vport being changed22302230- *22312231- * In the event of disable request for RSS, this function will zero out RSS22322232- * LUT, while in the event of enable request for RSS, it will reconfigure RSS22332233- * LUT with the default LUT configuration.22342234- */22352235-static int idpf_vport_manage_rss_lut(struct idpf_vport *vport)22362236-{22372237- bool ena = idpf_is_feature_ena(vport, NETIF_F_RXHASH);22382238- struct idpf_rss_data *rss_data;22392239- u16 idx = vport->idx;22402240- int lut_size;22412241-22422242- rss_data = &vport->adapter->vport_config[idx]->user_config.rss_data;22432243- lut_size = rss_data->rss_lut_size * sizeof(u32);22442244-22452245- if (ena) {22462246- /* This will contain the default or user configured LUT */22472247- memcpy(rss_data->rss_lut, rss_data->cached_lut, lut_size);22482248- } else {22492249- /* Save a 
copy of the current LUT to be restored later if22502250- * requested.22512251- */22522252- memcpy(rss_data->cached_lut, rss_data->rss_lut, lut_size);22532253-22542254- /* Zero out the current LUT to disable */22552255- memset(rss_data->rss_lut, 0, lut_size);22562256- }22572257-22582258- return idpf_config_rss(vport);22592259-}22602260-22612261-/**22622169 * idpf_set_features - set the netdev feature flags22632170 * @netdev: ptr to the netdev being adjusted22642171 * @features: the feature set that the stack is suggesting···22492224 }2250222522512226 if (changed & NETIF_F_RXHASH) {22272227+ struct idpf_netdev_priv *np = netdev_priv(netdev);22282228+22522229 netdev->features ^= NETIF_F_RXHASH;22532253- err = idpf_vport_manage_rss_lut(vport);22542254- if (err)22552255- goto unlock_mutex;22302230+22312231+ /* If the interface is not up when changing the rxhash, update22322232+ * to the HW is skipped. The updated LUT will be committed to22332233+ * the HW when the interface is brought up.22342234+ */22352235+ if (test_bit(IDPF_VPORT_UP, np->state)) {22362236+ err = idpf_config_rss(vport);22372237+ if (err)22382238+ goto unlock_mutex;22392239+ }22562240 }2257224122582242 if (changed & NETIF_F_GRO_HW) {
···28042804 * @vport: virtual port data structure28052805 * @get: flag to set or get rss look up table28062806 *28072807+ * When rxhash is disabled, RSS LUT will be configured with zeros. If rxhash28082808+ * is enabled, the LUT values stored in driver's soft copy will be used to setup28092809+ * the HW.28102810+ *28072811 * Returns 0 on success, negative on failure.28082812 */28092813int idpf_send_get_set_rss_lut_msg(struct idpf_vport *vport, bool get)···28182814 struct idpf_rss_data *rss_data;28192815 int buf_size, lut_buf_size;28202816 ssize_t reply_sz;28172817+ bool rxhash_ena;28212818 int i;2822281928232820 rss_data =28242821 &vport->adapter->vport_config[vport->idx]->user_config.rss_data;28222822+ rxhash_ena = idpf_is_feature_ena(vport, NETIF_F_RXHASH);28252823 buf_size = struct_size(rl, lut, rss_data->rss_lut_size);28262824 rl = kzalloc(buf_size, GFP_KERNEL);28272825 if (!rl)···28452839 } else {28462840 rl->lut_entries = cpu_to_le16(rss_data->rss_lut_size);28472841 for (i = 0; i < rss_data->rss_lut_size; i++)28482848- rl->lut[i] = cpu_to_le32(rss_data->rss_lut[i]);28422842+ rl->lut[i] = rxhash_ena ?28432843+ cpu_to_le32(rss_data->rss_lut[i]) : 0;2849284428502845 xn_params.vc_op = VIRTCHNL2_OP_SET_RSS_LUT;28512846 }···35773570 */35783571void idpf_vc_core_deinit(struct idpf_adapter *adapter)35793572{35733573+ struct idpf_hw *hw = &adapter->hw;35803574 bool remove_in_prog;3581357535823576 if (!test_bit(IDPF_VC_CORE_INIT, adapter->flags))···36003592 cancel_delayed_work_sync(&adapter->mbx_task);3601359336023594 idpf_vport_params_buf_rel(adapter);35953595+35963596+ kfree(hw->lan_regs);35973597+ hw->lan_regs = NULL;3603359836043599 kfree(adapter->vports);36053600 adapter->vports = NULL;
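The idpf RSS hunk above changes `idpf_send_get_set_rss_lut_msg()` so that the LUT pushed to hardware is all zeros whenever `NETIF_F_RXHASH` is disabled, while the driver's soft copy keeps the real values for when the feature is re-enabled. The core of that idea can be sketched in plain userspace C (names here are illustrative, not the driver's API):

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch of the LUT-fill logic: the hardware copy is zeroed when the
 * rxhash feature is off, while the soft copy is left untouched so it
 * can be replayed later. Not kernel code; endianness conversion from
 * the real driver (cpu_to_le32) is omitted for clarity. */
static void fill_hw_lut(uint32_t *hw, const uint32_t *soft, size_t n,
                        int rxhash_ena)
{
    for (size_t i = 0; i < n; i++)
        hw[i] = rxhash_ena ? soft[i] : 0;
}
```

This matches the patch's design choice of disabling RSS at "set" time rather than destroying the cached LUT, which is what lets the later hunks delete `idpf_vport_manage_rss_lut()` and its `cached_lut` copy entirely.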
···63086308DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_XILINX, 0x5021, of_pci_make_dev_node);63096309DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_REDHAT, 0x0005, of_pci_make_dev_node);63106310DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_EFAR, 0x9660, of_pci_make_dev_node);63116311-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_RPI, PCI_DEVICE_ID_RPI_RP1_C0, of_pci_make_dev_node);6312631163136312/*63146313 * Devices known to require a longer delay before first config space access
-7
drivers/pci/vgaarb.c
···652652 return true;653653 }654654655655- /*656656- * Vgadev has neither IO nor MEM enabled. If we haven't found any657657- * other VGA devices, it is the best candidate so far.658658- */659659- if (!boot_vga)660660- return true;661661-662655 return false;663656}664657
+1
drivers/pinctrl/Kconfig
···491491 depends on ARCH_MICROCHIP || COMPILE_TEST492492 depends on OF493493 select GENERIC_PINCONF494494+ select REGMAP_MMIO494495 default y495496 help496497 This selects the pinctrl driver for gpio2 on pic64gx.
···33 * drivers/uio/uio.c44 *55 * Copyright(C) 2005, Benedikt Spranger <b.spranger@linutronix.de>66- * Copyright(C) 2005, Thomas Gleixner <tglx@linutronix.de>66+ * Copyright(C) 2005, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>77 * Copyright(C) 2006, Hans J. Koch <hjk@hansjkoch.de>88 * Copyright(C) 2006, Greg Kroah-Hartman <greg@kroah.com>99 *
+7-6
drivers/xen/acpi.c
···8989 int *trigger_out,9090 int *polarity_out)9191{9292- int gsi;9292+ u32 gsi;9393 u8 pin;9494 struct acpi_prt_entry *entry;9595 int trigger = ACPI_LEVEL_SENSITIVE;9696- int polarity = acpi_irq_model == ACPI_IRQ_MODEL_GIC ?9696+ int ret, polarity = acpi_irq_model == ACPI_IRQ_MODEL_GIC ?9797 ACPI_ACTIVE_HIGH : ACPI_ACTIVE_LOW;98989999 if (!dev || !gsi_out || !trigger_out || !polarity_out)···105105106106 entry = acpi_pci_irq_lookup(dev, pin);107107 if (entry) {108108+ ret = 0;108109 if (entry->link)109109- gsi = acpi_pci_link_allocate_irq(entry->link,110110+ ret = acpi_pci_link_allocate_irq(entry->link,110111 entry->index,111112 &trigger, &polarity,112112- NULL);113113+ NULL, &gsi);113114 else114115 gsi = entry->index;115116 } else116116- gsi = -1;117117+ ret = -ENODEV;117118118118- if (gsi < 0)119119+ if (ret < 0)119120 return -EINVAL;120121121122 *gsi_out = gsi;
+17-15
fs/btrfs/delayed-inode.c
···152152 return ERR_PTR(-ENOMEM);153153 btrfs_init_delayed_node(node, root, ino);154154155155+ /* Cached in the inode and can be accessed. */156156+ refcount_set(&node->refs, 2);157157+ btrfs_delayed_node_ref_tracker_alloc(node, tracker, GFP_NOFS);158158+ btrfs_delayed_node_ref_tracker_alloc(node, &node->inode_cache_tracker, GFP_NOFS);159159+155160 /* Allocate and reserve the slot, from now it can return a NULL from xa_load(). */156161 ret = xa_reserve(&root->delayed_nodes, ino, GFP_NOFS);157157- if (ret == -ENOMEM) {158158- btrfs_delayed_node_ref_tracker_dir_exit(node);159159- kmem_cache_free(delayed_node_cache, node);160160- return ERR_PTR(-ENOMEM);161161- }162162+ if (ret == -ENOMEM)163163+ goto cleanup;164164+162165 xa_lock(&root->delayed_nodes);163166 ptr = xa_load(&root->delayed_nodes, ino);164167 if (ptr) {165168 /* Somebody inserted it, go back and read it. */166169 xa_unlock(&root->delayed_nodes);167167- btrfs_delayed_node_ref_tracker_dir_exit(node);168168- kmem_cache_free(delayed_node_cache, node);169169- node = NULL;170170- goto again;170170+ goto cleanup;171171 }172172 ptr = __xa_store(&root->delayed_nodes, ino, node, GFP_ATOMIC);173173 ASSERT(xa_err(ptr) != -EINVAL);174174 ASSERT(xa_err(ptr) != -ENOMEM);175175 ASSERT(ptr == NULL);176176-177177- /* Cached in the inode and can be accessed. */178178- refcount_set(&node->refs, 2);179179- btrfs_delayed_node_ref_tracker_alloc(node, tracker, GFP_ATOMIC);180180- btrfs_delayed_node_ref_tracker_alloc(node, &node->inode_cache_tracker, GFP_ATOMIC);181181-182176 btrfs_inode->delayed_node = node;183177 xa_unlock(&root->delayed_nodes);184178185179 return node;180180+cleanup:181181+ btrfs_delayed_node_ref_tracker_free(node, tracker);182182+ btrfs_delayed_node_ref_tracker_free(node, &node->inode_cache_tracker);183183+ btrfs_delayed_node_ref_tracker_dir_exit(node);184184+ kmem_cache_free(delayed_node_cache, node);185185+ if (ret)186186+ return ERR_PTR(ret);187187+ goto again;186188}187189188190/*
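The delayed-inode hunk above moves the `refcount_set(&node->refs, 2)` and ref-tracker allocation *before* the node is reserved and published in the xarray, and funnels every failure path through one `cleanup` label. The init-before-publish shape can be sketched with a single shared slot standing in for the xarray (all names here are illustrative):

```c
#include <stdlib.h>
#include <stddef.h>

/* Userspace sketch of the reordering: fully initialize the node
 * (including its reference count) before it becomes visible to other
 * lookers, and use one cleanup label for both the allocation-failure
 * and lost-race paths. */
struct node { int refs; };

static struct node *slot;   /* stand-in for the shared xarray entry */

static struct node *get_or_create(int fail_reserve)
{
    struct node *n;

    if (slot) {             /* somebody already inserted it: reuse */
        slot->refs++;
        return slot;
    }
    n = malloc(sizeof(*n));
    if (!n)
        return NULL;
    n->refs = 2;            /* cached + caller, set before publishing */
    if (fail_reserve)
        goto cleanup;
    slot = n;               /* publish only after full init */
    return n;
cleanup:
    free(n);
    return NULL;
}
```

A side benefit visible in the real patch: because the refs are set before the xarray lock is taken, the ref-tracker allocations can use `GFP_NOFS` instead of the stricter `GFP_ATOMIC`.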
+1
fs/btrfs/disk-io.c
···22552255 BTRFS_DATA_RELOC_TREE_OBJECTID, true);22562256 if (IS_ERR(root)) {22572257 if (!btrfs_test_opt(fs_info, IGNOREBADROOTS)) {22582258+ location.objectid = BTRFS_DATA_RELOC_TREE_OBJECTID;22582259 ret = PTR_ERR(root);22592260 goto out;22602261 }
+4-4
fs/btrfs/extent_io.c
···17281728 struct btrfs_ordered_extent *ordered;1729172917301730 ordered = btrfs_lookup_first_ordered_range(inode, cur,17311731- folio_end - cur);17311731+ fs_info->sectorsize);17321732 /*17331733 * We have just run delalloc before getting here, so17341734 * there must be an ordered extent.···17421742 btrfs_put_ordered_extent(ordered);1743174317441744 btrfs_mark_ordered_io_finished(inode, folio, cur,17451745- end - cur, true);17451745+ fs_info->sectorsize, true);17461746 /*17471747 * This range is beyond i_size, thus we don't need to17481748 * bother writing back.···17511751 * writeback the sectors with subpage dirty bits,17521752 * causing writeback without ordered extent.17531753 */17541754- btrfs_folio_clear_dirty(fs_info, folio, cur, end - cur);17551755- break;17541754+ btrfs_folio_clear_dirty(fs_info, folio, cur, fs_info->sectorsize);17551755+ continue;17561756 }17571757 ret = submit_one_sector(inode, folio, cur, bio_ctrl, i_size);17581758 if (unlikely(ret < 0)) {
+47-16
fs/btrfs/inode.c
···481481 ASSERT(size <= sectorsize);482482483483 /*484484- * The compressed size also needs to be no larger than a sector.485485- * That's also why we only need one page as the parameter.484484+ * The compressed size also needs to be no larger than a page.485485+ * That's also why we only need one folio as the parameter.486486 */487487- if (compressed_folio)487487+ if (compressed_folio) {488488 ASSERT(compressed_size <= sectorsize);489489- else489489+ ASSERT(compressed_size <= PAGE_SIZE);490490+ } else {490491 ASSERT(compressed_size == 0);492492+ }491493492494 if (compressed_size && compressed_folio)493495 cur_size = compressed_size;···576574 if (offset != 0)577575 return false;578576577577+ /*578578+ * Even for bs > ps cases, cow_file_range_inline() can only accept a579579+ * single folio.580580+ *581581+ * This can be problematic and cause access beyond page boundary if a582582+ * page sized folio is passed into that function.583583+ * And encoded write is doing exactly that.584584+ * So here limits the inlined extent size to PAGE_SIZE.585585+ */586586+ if (size > PAGE_SIZE || compressed_size > PAGE_SIZE)587587+ return false;588588+579589 /* Inline extents are limited to sectorsize. 
*/580590 if (size > fs_info->sectorsize)581591 return false;···632618 struct btrfs_drop_extents_args drop_args = { 0 };633619 struct btrfs_root *root = inode->root;634620 struct btrfs_fs_info *fs_info = root->fs_info;635635- struct btrfs_trans_handle *trans;621621+ struct btrfs_trans_handle *trans = NULL;636622 u64 data_len = (compressed_size ?: size);637623 int ret;638624 struct btrfs_path *path;639625640626 path = btrfs_alloc_path();641641- if (!path)642642- return -ENOMEM;627627+ if (!path) {628628+ ret = -ENOMEM;629629+ goto out;630630+ }643631644632 trans = btrfs_join_transaction(root);645633 if (IS_ERR(trans)) {646646- btrfs_free_path(path);647647- return PTR_ERR(trans);634634+ ret = PTR_ERR(trans);635635+ trans = NULL;636636+ goto out;648637 }649638 trans->block_rsv = &inode->block_rsv;650639···691674 * it won't count as data extent, free them directly here.692675 * And at reserve time, it's always aligned to page size, so693676 * just free one page here.677677+ *678678+ * If we fallback to non-inline (ret == 1) due to -ENOSPC, then we need679679+ * to keep the data reservation.694680 */695695- btrfs_qgroup_free_data(inode, NULL, 0, fs_info->sectorsize, NULL);681681+ if (ret <= 0)682682+ btrfs_qgroup_free_data(inode, NULL, 0, fs_info->sectorsize, NULL);696683 btrfs_free_path(path);697697- btrfs_end_transaction(trans);684684+ if (trans)685685+ btrfs_end_transaction(trans);698686 return ret;699687}700688···40484026 btrfs_set_inode_mapping_order(inode);4049402740504028cache_index:40514051- ret = btrfs_init_file_extent_tree(inode);40524052- if (ret)40534053- goto out;40544054- btrfs_inode_set_file_extent_range(inode, 0,40554055- round_up(i_size_read(vfs_inode), fs_info->sectorsize));40564029 /*40574030 * If we were modified in the current generation and evicted from memory40584031 * and then re-read we need to do a full sync since we don't have any···41334116 "error loading props for ino %llu (root %llu): %d",41344117 btrfs_ino(inode), btrfs_root_id(root), 
ret);41354118 }41194119+41204120+ /*41214121+ * We don't need the path anymore, so release it to avoid holding a read41224122+ * lock on a leaf while calling btrfs_init_file_extent_tree(), which can41234123+ * allocate memory that triggers reclaim (GFP_KERNEL) and cause a locking41244124+ * dependency.41254125+ */41264126+ btrfs_release_path(path);41274127+41284128+ ret = btrfs_init_file_extent_tree(inode);41294129+ if (ret)41304130+ goto out;41314131+ btrfs_inode_set_file_extent_range(inode, 0,41324132+ round_up(i_size_read(vfs_inode), fs_info->sectorsize));4136413341374134 if (!maybe_acls)41384135 cache_no_acl(vfs_inode);
···736736 */737737void btrfs_set_free_space_cache_settings(struct btrfs_fs_info *fs_info)738738{739739- if (fs_info->sectorsize < PAGE_SIZE) {739739+ if (fs_info->sectorsize != PAGE_SIZE && btrfs_test_opt(fs_info, SPACE_CACHE)) {740740+ btrfs_info(fs_info,741741+ "forcing free space tree for sector size %u with page size %lu",742742+ fs_info->sectorsize, PAGE_SIZE);740743 btrfs_clear_opt(fs_info->mount_opt, SPACE_CACHE);741741- if (!btrfs_test_opt(fs_info, FREE_SPACE_TREE)) {742742- btrfs_info(fs_info,743743- "forcing free space tree for sector size %u with page size %lu",744744- fs_info->sectorsize, PAGE_SIZE);745745- btrfs_set_opt(fs_info->mount_opt, FREE_SPACE_TREE);746746- }744744+ btrfs_set_opt(fs_info->mount_opt, FREE_SPACE_TREE);747745 }748746749747 /*
+6-5
fs/btrfs/transaction.c
···520520 * when this is done, it is safe to start a new transaction, but the current521521 * transaction might not be fully on disk.522522 */523523-static void wait_current_trans(struct btrfs_fs_info *fs_info)523523+static void wait_current_trans(struct btrfs_fs_info *fs_info, unsigned int type)524524{525525 struct btrfs_transaction *cur_trans;526526527527 spin_lock(&fs_info->trans_lock);528528 cur_trans = fs_info->running_transaction;529529- if (cur_trans && is_transaction_blocked(cur_trans)) {529529+ if (cur_trans && is_transaction_blocked(cur_trans) &&530530+ (btrfs_blocked_trans_types[cur_trans->state] & type)) {530531 refcount_inc(&cur_trans->use_count);531532 spin_unlock(&fs_info->trans_lock);532533···702701 sb_start_intwrite(fs_info->sb);703702704703 if (may_wait_transaction(fs_info, type))705705- wait_current_trans(fs_info);704704+ wait_current_trans(fs_info, type);706705707706 do {708707 ret = join_transaction(fs_info, type);709708 if (ret == -EBUSY) {710710- wait_current_trans(fs_info);709709+ wait_current_trans(fs_info, type);711710 if (unlikely(type == TRANS_ATTACH ||712711 type == TRANS_JOIN_NOSTART))713712 ret = -ENOENT;···1004100310051004void btrfs_throttle(struct btrfs_fs_info *fs_info)10061005{10071007- wait_current_trans(fs_info);10061006+ wait_current_trans(fs_info, TRANS_START);10081007}1009100810101009bool btrfs_should_end_transaction(struct btrfs_trans_handle *trans)
+3-5
fs/btrfs/tree-log.c
···190190191191 btrfs_abort_transaction(wc->trans, error);192192193193- if (wc->subvol_path->nodes[0]) {193193+ if (wc->subvol_path && wc->subvol_path->nodes[0]) {194194 btrfs_crit(fs_info,195195 "subvolume (root %llu) leaf currently being processed:",196196 btrfs_root_id(wc->root));···63416341 * and no keys greater than that, so bail out.63426342 */63436343 break;63446344- } else if ((min_key->type == BTRFS_INODE_REF_KEY ||63456345- min_key->type == BTRFS_INODE_EXTREF_KEY) &&63466346- (inode->generation == trans->transid ||63476347- ctx->logging_conflict_inodes)) {63446344+ } else if (min_key->type == BTRFS_INODE_REF_KEY ||63456345+ min_key->type == BTRFS_INODE_EXTREF_KEY) {63486346 u64 other_ino = 0;63496347 u64 other_parent = 0;63506348
···644644 * fs contexts (including its own) due to self-controlled RO645645 * accesses/contexts and no side-effect changes that need to646646 * context save & restore so it can reuse the current thread647647- * context. However, it still needs to bump `s_stack_depth` to648648- * avoid kernel stack overflow from nested filesystems.647647+ * context.648648+ * However, we still need to prevent kernel stack overflow due649649+ * to filesystem nesting: just ensure that s_stack_depth is 0650650+ * to disallow mounting EROFS on stacked filesystems.651651+ * Note: s_stack_depth is not incremented here for now, since652652+ * EROFS is the only fs supporting file-backed mounts for now.653653+ * It MUST change if another fs plans to support them, which654654+ * may also require adjusting FILESYSTEM_MAX_STACK_DEPTH.649655 */650656 if (erofs_is_fileio_mode(sbi)) {651651- sb->s_stack_depth =652652- file_inode(sbi->dif0.file)->i_sb->s_stack_depth + 1;653653- if (sb->s_stack_depth > FILESYSTEM_MAX_STACK_DEPTH) {654654- erofs_err(sb, "maximum fs stacking depth exceeded");657657+ inode = file_inode(sbi->dif0.file);658658+ if ((inode->i_sb->s_op == &erofs_sops &&659659+ !inode->i_sb->s_bdev) ||660660+ inode->i_sb->s_stack_depth) {661661+ erofs_err(sb, "file-backed mounts cannot be applied to stacked fses");655662 return -ENOTBLK;656663 }657664 }
+3
fs/inode.c
···15931593 * @hashval: hash value (usually inode number) to search for15941594 * @test: callback used for comparisons between inodes15951595 * @data: opaque data pointer to pass to @test15961596+ * @isnew: return argument telling whether I_NEW was set when15971597+ * the inode was found in hash (the caller needs to15981598+ * wait for I_NEW to clear)15961599 *15971600 * Search for the inode specified by @hashval and @data in the inode cache.15981601 * If the inode is in the cache, the inode is returned with an incremented
+35-15
fs/iomap/buffered-io.c
···832832 if (!mapping_large_folio_support(iter->inode->i_mapping))833833 len = min_t(size_t, len, PAGE_SIZE - offset_in_page(pos));834834835835- if (iter->fbatch) {835835+ if (iter->iomap.flags & IOMAP_F_FOLIO_BATCH) {836836 struct folio *folio = folio_batch_next(iter->fbatch);837837838838 if (!folio)···929929 * process so return and let the caller iterate and refill the batch.930930 */931931 if (!folio) {932932- WARN_ON_ONCE(!iter->fbatch);932932+ WARN_ON_ONCE(!(iter->iomap.flags & IOMAP_F_FOLIO_BATCH));933933 return 0;934934 }935935···15441544 return status;15451545}1546154615471547-loff_t15471547+/**15481548+ * iomap_fill_dirty_folios - fill a folio batch with dirty folios15491549+ * @iter: Iteration structure15501550+ * @start: Start offset of range. Updated based on lookup progress.15511551+ * @end: End offset of range15521552+ * @iomap_flags: Flags to set on the associated iomap to track the batch.15531553+ *15541554+ * Returns the folio count directly. Also returns the associated control flag if15551555+ * the batch lookup is performed and the expected offset of a subsequent15561556+ * lookup via out params. 
The caller is responsible to set the flag on the15571557+ * associated iomap.15581558+ */15591559+unsigned int15481560iomap_fill_dirty_folios(15491561 struct iomap_iter *iter,15501550- loff_t offset,15511551- loff_t length)15621562+ loff_t *start,15631563+ loff_t end,15641564+ unsigned int *iomap_flags)15521565{15531566 struct address_space *mapping = iter->inode->i_mapping;15541554- pgoff_t start = offset >> PAGE_SHIFT;15551555- pgoff_t end = (offset + length - 1) >> PAGE_SHIFT;15671567+ pgoff_t pstart = *start >> PAGE_SHIFT;15681568+ pgoff_t pend = (end - 1) >> PAGE_SHIFT;15691569+ unsigned int count;1556157015571557- iter->fbatch = kmalloc(sizeof(struct folio_batch), GFP_KERNEL);15581558- if (!iter->fbatch)15591559- return offset + length;15601560- folio_batch_init(iter->fbatch);15711571+ if (!iter->fbatch) {15721572+ *start = end;15731573+ return 0;15741574+ }1561157515621562- filemap_get_folios_dirty(mapping, &start, end, iter->fbatch);15631563- return (start << PAGE_SHIFT);15761576+ count = filemap_get_folios_dirty(mapping, &pstart, pend, iter->fbatch);15771577+ *start = (pstart << PAGE_SHIFT);15781578+ *iomap_flags |= IOMAP_F_FOLIO_BATCH;15791579+ return count;15641580}15651581EXPORT_SYMBOL_GPL(iomap_fill_dirty_folios);15661582···15851569 const struct iomap_ops *ops,15861570 const struct iomap_write_ops *write_ops, void *private)15871571{15721572+ struct folio_batch fbatch;15881573 struct iomap_iter iter = {15891574 .inode = inode,15901575 .pos = pos,15911576 .len = len,15921577 .flags = IOMAP_ZERO,15931578 .private = private,15791579+ .fbatch = &fbatch,15941580 };15951581 struct address_space *mapping = inode->i_mapping;15961582 int ret;15971583 bool range_dirty;15841584+15851585+ folio_batch_init(&fbatch);1598158615991587 /*16001588 * To avoid an unconditional flush, check pagecache state and only flush···16101590 while ((ret = iomap_iter(&iter, ops)) > 0) {16111591 const struct iomap *srcmap = iomap_iter_srcmap(&iter);1612159216131613- if 
(WARN_ON_ONCE(iter.fbatch &&15931593+ if (WARN_ON_ONCE((iter.iomap.flags & IOMAP_F_FOLIO_BATCH) &&16141594 srcmap->type != IOMAP_UNWRITTEN))16151595 return -EIO;1616159616171617- if (!iter.fbatch &&15971597+ if (!(iter.iomap.flags & IOMAP_F_FOLIO_BATCH) &&16181598 (srcmap->type == IOMAP_HOLE ||16191599 srcmap->type == IOMAP_UNWRITTEN)) {16201600 s64 status;
···369369 while (!list_empty(dispose)) {370370 flc = list_first_entry(dispose, struct file_lock_core, flc_list);371371 list_del_init(&flc->flc_list);372372- if (flc->flc_flags & (FL_LEASE|FL_DELEG|FL_LAYOUT))373373- locks_free_lease(file_lease(flc));374374- else375375- locks_free_lock(file_lock(flc));372372+ locks_free_lock(file_lock(flc));373373+ }374374+}375375+376376+static void377377+lease_dispose_list(struct list_head *dispose)378378+{379379+ struct file_lock_core *flc;380380+381381+ while (!list_empty(dispose)) {382382+ flc = list_first_entry(dispose, struct file_lock_core, flc_list);383383+ list_del_init(&flc->flc_list);384384+ locks_free_lease(file_lease(flc));376385 }377386}378387···585576 __f_setown(filp, task_pid(current), PIDTYPE_TGID, 0);586577}587578579579+/**580580+ * lease_open_conflict - see if the given file points to an inode that has581581+ * an existing open that would conflict with the582582+ * desired lease.583583+ * @filp: file to check584584+ * @arg: type of lease that we're trying to acquire585585+ *586586+ * Check to see if there's an existing open fd on this file that would587587+ * conflict with the lease we're trying to set.588588+ */589589+static int590590+lease_open_conflict(struct file *filp, const int arg)591591+{592592+ struct inode *inode = file_inode(filp);593593+ int self_wcount = 0, self_rcount = 0;594594+595595+ if (arg == F_RDLCK)596596+ return inode_is_open_for_write(inode) ? -EAGAIN : 0;597597+ else if (arg != F_WRLCK)598598+ return 0;599599+600600+ /*601601+ * Make sure that only read/write count is from lease requestor.602602+ * Note that this will result in denying write leases when i_writecount603603+ * is negative, which is what we want. 
(We shouldn't grant write leases604604+ * on files open for execution.)605605+ */606606+ if (filp->f_mode & FMODE_WRITE)607607+ self_wcount = 1;608608+ else if (filp->f_mode & FMODE_READ)609609+ self_rcount = 1;610610+611611+ if (atomic_read(&inode->i_writecount) != self_wcount ||612612+ atomic_read(&inode->i_readcount) != self_rcount)613613+ return -EAGAIN;614614+615615+ return 0;616616+}617617+588618static const struct lease_manager_operations lease_manager_ops = {589619 .lm_break = lease_break_callback,590620 .lm_change = lease_modify,591621 .lm_setup = lease_setup,622622+ .lm_open_conflict = lease_open_conflict,592623};593624594625/*···16691620 spin_unlock(&ctx->flc_lock);16701621 percpu_up_read(&file_rwsem);1671162216721672- locks_dispose_list(&dispose);16231623+ lease_dispose_list(&dispose);16731624 error = wait_event_interruptible_timeout(new_fl->c.flc_wait,16741625 list_empty(&new_fl->c.flc_blocked_member),16751626 break_time);···16921643out:16931644 spin_unlock(&ctx->flc_lock);16941645 percpu_up_read(&file_rwsem);16951695- locks_dispose_list(&dispose);16461646+ lease_dispose_list(&dispose);16961647free_lock:16971648 locks_free_lease(new_fl);16981649 return error;···17761727 spin_unlock(&ctx->flc_lock);17771728 percpu_up_read(&file_rwsem);1778172917791779- locks_dispose_list(&dispose);17301730+ lease_dispose_list(&dispose);17801731 }17811732 return type;17821733}···17911742 if (deleg->d_flags != 0 || deleg->__pad != 0)17921743 return -EINVAL;17931744 deleg->d_type = __fcntl_getlease(filp, FL_DELEG);17941794- return 0;17951795-}17961796-17971797-/**17981798- * check_conflicting_open - see if the given file points to an inode that has17991799- * an existing open that would conflict with the18001800- * desired lease.18011801- * @filp: file to check18021802- * @arg: type of lease that we're trying to acquire18031803- * @flags: current lock flags18041804- *18051805- * Check to see if there's an existing open fd on this file that would18061806- * conflict with 
the lease we're trying to set.18071807- */18081808-static int18091809-check_conflicting_open(struct file *filp, const int arg, int flags)18101810-{18111811- struct inode *inode = file_inode(filp);18121812- int self_wcount = 0, self_rcount = 0;18131813-18141814- if (flags & FL_LAYOUT)18151815- return 0;18161816- if (flags & FL_DELEG)18171817- /* We leave these checks to the caller */18181818- return 0;18191819-18201820- if (arg == F_RDLCK)18211821- return inode_is_open_for_write(inode) ? -EAGAIN : 0;18221822- else if (arg != F_WRLCK)18231823- return 0;18241824-18251825- /*18261826- * Make sure that only read/write count is from lease requestor.18271827- * Note that this will result in denying write leases when i_writecount18281828- * is negative, which is what we want. (We shouldn't grant write leases18291829- * on files open for execution.)18301830- */18311831- if (filp->f_mode & FMODE_WRITE)18321832- self_wcount = 1;18331833- else if (filp->f_mode & FMODE_READ)18341834- self_rcount = 1;18351835-18361836- if (atomic_read(&inode->i_writecount) != self_wcount ||18371837- atomic_read(&inode->i_readcount) != self_rcount)18381838- return -EAGAIN;18391839-18401745 return 0;18411746}18421747···18301827 percpu_down_read(&file_rwsem);18311828 spin_lock(&ctx->flc_lock);18321829 time_out_leases(inode, &dispose);18331833- error = check_conflicting_open(filp, arg, lease->c.flc_flags);18301830+ error = lease->fl_lmops->lm_open_conflict(filp, arg);18341831 if (error)18351832 goto out;18361833···18871884 * precedes these checks.18881885 */18891886 smp_mb();18901890- error = check_conflicting_open(filp, arg, lease->c.flc_flags);18871887+ error = lease->fl_lmops->lm_open_conflict(filp, arg);18911888 if (error) {18921889 locks_unlink_lock_ctx(&lease->c);18931890 goto out;···18991896out:19001897 spin_unlock(&ctx->flc_lock);19011898 percpu_up_read(&file_rwsem);19021902- locks_dispose_list(&dispose);18991899+ lease_dispose_list(&dispose);19031900 if (is_deleg)19041901 
inode_unlock(inode);19051902 if (!error && !my_fl)···19351932 error = fl->fl_lmops->lm_change(victim, F_UNLCK, &dispose);19361933 spin_unlock(&ctx->flc_lock);19371934 percpu_up_read(&file_rwsem);19381938- locks_dispose_list(&dispose);19351935+ lease_dispose_list(&dispose);19391936 return error;19401937}19411938···27382735 spin_unlock(&ctx->flc_lock);27392736 percpu_up_read(&file_rwsem);2740273727412741- locks_dispose_list(&dispose);27382738+ lease_dispose_list(&dispose);27422739}2743274027442741/*
+15-6
fs/namei.c
···830830static bool legitimize_links(struct nameidata *nd)831831{832832 int i;833833- if (unlikely(nd->flags & LOOKUP_CACHED)) {834834- drop_links(nd);835835- nd->depth = 0;836836- return false;837837- }833833+834834+ VFS_BUG_ON(nd->flags & LOOKUP_CACHED);835835+838836 for (i = 0; i < nd->depth; i++) {839837 struct saved *last = nd->stack + i;840838 if (unlikely(!legitimize_path(nd, &last->link, last->seq))) {···881883882884 BUG_ON(!(nd->flags & LOOKUP_RCU));883885886886+ if (unlikely(nd->flags & LOOKUP_CACHED)) {887887+ drop_links(nd);888888+ nd->depth = 0;889889+ goto out1;890890+ }884891 if (unlikely(nd->depth && !legitimize_links(nd)))885892 goto out1;886893 if (unlikely(!legitimize_path(nd, &nd->path, nd->seq)))···921918 int res;922919 BUG_ON(!(nd->flags & LOOKUP_RCU));923920921921+ if (unlikely(nd->flags & LOOKUP_CACHED)) {922922+ drop_links(nd);923923+ nd->depth = 0;924924+ goto out2;925925+ }924926 if (unlikely(nd->depth && !legitimize_links(nd)))925927 goto out2;926928 res = __legitimize_mnt(nd->path.mnt, nd->m_seq);···28442836}2845283728462838/**28472847- * start_dirop - begin a create or remove dirop, performing locking and lookup28392839+ * __start_dirop - begin a create or remove dirop, performing locking and lookup28482840 * @parent: the dentry of the parent in which the operation will occur28492841 * @name: a qstr holding the name within that parent28502842 * @lookup_flags: intent and other lookup flags.28432843+ * @state: task state bitmask28512844 *28522845 * The lookup is performed and necessary locks are taken so that, on success,28532846 * the returned dentry can be operated on safely.
···764764 return lease_modify(onlist, arg, dispose);765765}766766767767+/**768768+ * nfsd4_layout_lm_open_conflict - see if the given file points to an inode that has769769+ * an existing open that would conflict with the770770+ * desired lease.771771+ * @filp: file to check772772+ * @arg: type of lease that we're trying to acquire773773+ *774774+ * The kernel will call into this operation to determine whether there775775+ * are conflicting opens that may prevent the layout from being granted.776776+ * For nfsd, that check is done at a higher level, so this trivially777777+ * returns 0.778778+ */779779+static int780780+nfsd4_layout_lm_open_conflict(struct file *filp, int arg)781781+{782782+ return 0;783783+}784784+767785static const struct lease_manager_operations nfsd4_layouts_lm_ops = {768768- .lm_break = nfsd4_layout_lm_break,769769- .lm_change = nfsd4_layout_lm_change,786786+ .lm_break = nfsd4_layout_lm_break,787787+ .lm_change = nfsd4_layout_lm_change,788788+ .lm_open_conflict = nfsd4_layout_lm_open_conflict,770789};771790772791int
···8484/* forward declarations */8585static bool check_for_locks(struct nfs4_file *fp, struct nfs4_lockowner *lowner);8686static void nfs4_free_ol_stateid(struct nfs4_stid *stid);8787-void nfsd4_end_grace(struct nfsd_net *nn);8787+static void nfsd4_end_grace(struct nfsd_net *nn);8888static void _free_cpntf_state_locked(struct nfsd_net *nn, struct nfs4_cpntf_state *cps);8989static void nfsd4_file_hash_remove(struct nfs4_file *fi);9090static void deleg_reaper(struct nfsd_net *nn);···1759175917601760/**17611761 * nfsd4_revoke_states - revoke all nfsv4 states associated with given filesystem17621762- * @net: used to identify instance of nfsd (there is one per net namespace)17621762+ * @nn: used to identify instance of nfsd (there is one per net namespace)17631763 * @sb: super_block used to identify target filesystem17641764 *17651765 * All nfs4 states (open, lock, delegation, layout) held by the server instance···17711771 * The clients which own the states will subsequently being notified that the17721772 * states have been "admin-revoked".17731773 */17741774-void nfsd4_revoke_states(struct net *net, struct super_block *sb)17741774+void nfsd4_revoke_states(struct nfsd_net *nn, struct super_block *sb)17751775{17761776- struct nfsd_net *nn = net_generic(net, nfsd_net_id);17771776 unsigned int idhashval;17781777 unsigned int sc_types;1779177817801779 sc_types = SC_TYPE_OPEN | SC_TYPE_LOCK | SC_TYPE_DELEG | SC_TYPE_LAYOUT;1781178017821781 spin_lock(&nn->client_lock);17831783- for (idhashval = 0; idhashval < CLIENT_HASH_MASK; idhashval++) {17821782+ for (idhashval = 0; idhashval < CLIENT_HASH_SIZE; idhashval++) {17841783 struct list_head *head = &nn->conf_id_hashtbl[idhashval];17851784 struct nfs4_client *clp;17861785 retry:···55555556 return -EAGAIN;55565557}5557555855595559+/**55605560+ * nfsd4_deleg_lm_open_conflict - see if the given file points to an inode that has55615561+ * an existing open that would conflict with the55625562+ * desired lease.55635563+ * @filp: file 
to check55645564+ * @arg: type of lease that we're trying to acquire55655565+ *55665566+ * The kernel will call into this operation to determine whether there55675567+ * are conflicting opens that may prevent the deleg from being granted.55685568+ * For nfsd, that check is done at a higher level, so this trivially55695569+ * returns 0.55705570+ */55715571+static int55725572+nfsd4_deleg_lm_open_conflict(struct file *filp, int arg)55735573+{55745574+ return 0;55755575+}55765576+55585577static const struct lease_manager_operations nfsd_lease_mng_ops = {55595578 .lm_breaker_owns_lease = nfsd_breaker_owns_lease,55605579 .lm_break = nfsd_break_deleg_cb,55615580 .lm_change = nfsd_change_deleg_cb,55815581+ .lm_open_conflict = nfsd4_deleg_lm_open_conflict,55625582};5563558355645584static __be32 nfsd4_check_seqid(struct nfsd4_compound_state *cstate, struct nfs4_stateowner *so, u32 seqid)···65886570 return nfs_ok;65896571}6590657265916591-void65736573+static void65926574nfsd4_end_grace(struct nfsd_net *nn)65936575{65946576 /* do nothing if grace period already ended */···66216603 */66226604}6623660566066606+/**66076607+ * nfsd4_force_end_grace - forcibly end the NFSv4 grace period66086608+ * @nn: network namespace for the server instance to be updated66096609+ *66106610+ * Forces bypass of normal grace period completion, then schedules66116611+ * the laundromat to end the grace period immediately. 
Does not wait66126612+ * for the grace period to fully terminate before returning.66136613+ *66146614+ * Return values:66156615+ * %true: Grace termination schedule66166616+ * %false: No action was taken66176617+ */66186618+bool nfsd4_force_end_grace(struct nfsd_net *nn)66196619+{66206620+ if (!nn->client_tracking_ops)66216621+ return false;66226622+ spin_lock(&nn->client_lock);66236623+ if (nn->grace_ended || !nn->client_tracking_active) {66246624+ spin_unlock(&nn->client_lock);66256625+ return false;66266626+ }66276627+ WRITE_ONCE(nn->grace_end_forced, true);66286628+ mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);66296629+ spin_unlock(&nn->client_lock);66306630+ return true;66316631+}66326632+66246633/*66256634 * If we've waited a lease period but there are still clients trying to66266635 * reclaim, wait a little longer to give them a chance to finish.···66576612 time64_t double_grace_period_end = nn->boot_time +66586613 2 * nn->nfsd4_lease;6659661466156615+ if (READ_ONCE(nn->grace_end_forced))66166616+ return false;66606617 if (nn->track_reclaim_completes &&66616618 atomic_read(&nn->nr_reclaim_complete) ==66626619 nn->reclaim_str_hashtbl_size)···89798932 nn->unconf_name_tree = RB_ROOT;89808933 nn->boot_time = ktime_get_real_seconds();89818934 nn->grace_ended = false;89358935+ nn->grace_end_forced = false;89368936+ nn->client_tracking_active = false;89828937 nn->nfsd4_manager.block_opens = true;89838938 INIT_LIST_HEAD(&nn->nfsd4_manager.list);89848939 INIT_LIST_HEAD(&nn->client_lru);···90619012 return ret;90629013 locks_start_grace(net, &nn->nfsd4_manager);90639014 nfsd4_client_tracking_init(net);90159015+ /* safe for laundromat to run now */90169016+ spin_lock(&nn->client_lock);90179017+ nn->client_tracking_active = true;90189018+ spin_unlock(&nn->client_lock);90649019 if (nn->track_reclaim_completes && nn->reclaim_str_hashtbl_size == 0)90659020 goto skip_grace;90669021 printk(KERN_INFO "NFSD: starting %lld-second grace period (net 
%x)\n",···9113906091149061 shrinker_free(nn->nfsd_client_shrinker);91159062 cancel_work_sync(&nn->nfsd_shrinker_work);90639063+ spin_lock(&nn->client_lock);90649064+ nn->client_tracking_active = false;90659065+ spin_unlock(&nn->client_lock);91169066 cancel_delayed_work_sync(&nn->laundromat_work);91179067 locks_end_grace(&nn->nfsd4_manager);91189068
+9-3
fs/nfsd/nfsctl.c
···259259 struct path path;260260 char *fo_path;261261 int error;262262+ struct nfsd_net *nn;262263263264 /* sanity check */264265 if (size == 0)···286285 * 3. Is that directory the root of an exported file system?287286 */288287 error = nlmsvc_unlock_all_by_sb(path.dentry->d_sb);289289- nfsd4_revoke_states(netns(file), path.dentry->d_sb);288288+ mutex_lock(&nfsd_mutex);289289+ nn = net_generic(netns(file), nfsd_net_id);290290+ if (nn->nfsd_serv)291291+ nfsd4_revoke_states(nn, path.dentry->d_sb);292292+ else293293+ error = -EINVAL;294294+ mutex_unlock(&nfsd_mutex);290295291296 path_put(&path);292297 return error;···10891082 case 'Y':10901083 case 'y':10911084 case '1':10921092- if (!nn->nfsd_serv)10851085+ if (!nfsd4_force_end_grace(nn))10931086 return -EBUSY;10941087 trace_nfsd_end_grace(netns(file));10951095- nfsd4_end_grace(nn);10961088 break;10971089 default:10981090 return -EINVAL;
···176176 /**177177 * @disable:178178 *179179- * The @disable callback should disable the bridge.179179+ * This callback should disable the bridge. It is called right before180180+ * the preceding element in the display pipe is disabled. If the181181+ * preceding element is a bridge this means it's called before that182182+ * bridge's @disable vfunc. If the preceding element is a &drm_encoder183183+ * it's called right before the &drm_encoder_helper_funcs.disable,184184+ * &drm_encoder_helper_funcs.prepare or &drm_encoder_helper_funcs.dpms185185+ * hook.180186 *181187 * The bridge can assume that the display pipe (i.e. clocks and timing182188 * signals) feeding it is still running when this callback is called.183183- *184184- *185185- * If the preceding element is a &drm_bridge, then this is called before186186- * that bridge is disabled via one of:187187- *188188- * - &drm_bridge_funcs.disable189189- * - &drm_bridge_funcs.atomic_disable190190- *191191- * If the preceding element of the bridge is a display controller, then192192- * this callback is called before the encoder is disabled via one of:193193- *194194- * - &drm_encoder_helper_funcs.atomic_disable195195- * - &drm_encoder_helper_funcs.prepare196196- * - &drm_encoder_helper_funcs.disable197197- * - &drm_encoder_helper_funcs.dpms198198- *199199- * and the CRTC is disabled via one of:200200- *201201- * - &drm_crtc_helper_funcs.prepare202202- * - &drm_crtc_helper_funcs.atomic_disable203203- * - &drm_crtc_helper_funcs.disable204204- * - &drm_crtc_helper_funcs.dpms.205189 *206190 * The @disable callback is optional.207191 *···199215 /**200216 * @post_disable:201217 *218218+ * This callback should disable the bridge. It is called right after the219219+ * preceding element in the display pipe is disabled. If the preceding220220+ * element is a bridge this means it's called after that bridge's221221+ * @post_disable function. 
If the preceding element is a &drm_encoder222222+ * it's called right after the encoder's223223+ * &drm_encoder_helper_funcs.disable, &drm_encoder_helper_funcs.prepare224224+ * or &drm_encoder_helper_funcs.dpms hook.225225+ *202226 * The bridge must assume that the display pipe (i.e. clocks and timing203203- * signals) feeding this bridge is no longer running when the204204- * @post_disable is called.205205- *206206- * This callback should perform all the actions required by the hardware207207- * after it has stopped receiving signals from the preceding element.208208- *209209- * If the preceding element is a &drm_bridge, then this is called after210210- * that bridge is post-disabled (unless marked otherwise by the211211- * @pre_enable_prev_first flag) via one of:212212- *213213- * - &drm_bridge_funcs.post_disable214214- * - &drm_bridge_funcs.atomic_post_disable215215- *216216- * If the preceding element of the bridge is a display controller, then217217- * this callback is called after the encoder is disabled via one of:218218- *219219- * - &drm_encoder_helper_funcs.atomic_disable220220- * - &drm_encoder_helper_funcs.prepare221221- * - &drm_encoder_helper_funcs.disable222222- * - &drm_encoder_helper_funcs.dpms223223- *224224- * and the CRTC is disabled via one of:225225- *226226- * - &drm_crtc_helper_funcs.prepare227227- * - &drm_crtc_helper_funcs.atomic_disable228228- * - &drm_crtc_helper_funcs.disable229229- * - &drm_crtc_helper_funcs.dpms227227+ * signals) feeding it is no longer running when this callback is228228+ * called.230229 *231230 * The @post_disable callback is optional.232231 *···252285 /**253286 * @pre_enable:254287 *288288+ * This callback should enable the bridge. It is called right before289289+ * the preceding element in the display pipe is enabled. If the290290+ * preceding element is a bridge this means it's called before that291291+ * bridge's @pre_enable function. 
If the preceding element is a292292+ * &drm_encoder it's called right before the encoder's293293+ * &drm_encoder_helper_funcs.enable, &drm_encoder_helper_funcs.commit or294294+ * &drm_encoder_helper_funcs.dpms hook.295295+ *255296 * The display pipe (i.e. clocks and timing signals) feeding this bridge256256- * will not yet be running when the @pre_enable is called.257257- *258258- * This callback should perform all the necessary actions to prepare the259259- * bridge to accept signals from the preceding element.260260- *261261- * If the preceding element is a &drm_bridge, then this is called before262262- * that bridge is pre-enabled (unless marked otherwise by263263- * @pre_enable_prev_first flag) via one of:264264- *265265- * - &drm_bridge_funcs.pre_enable266266- * - &drm_bridge_funcs.atomic_pre_enable267267- *268268- * If the preceding element of the bridge is a display controller, then269269- * this callback is called before the CRTC is enabled via one of:270270- *271271- * - &drm_crtc_helper_funcs.atomic_enable272272- * - &drm_crtc_helper_funcs.commit273273- *274274- * and the encoder is enabled via one of:275275- *276276- * - &drm_encoder_helper_funcs.atomic_enable277277- * - &drm_encoder_helper_funcs.enable278278- * - &drm_encoder_helper_funcs.commit297297+ * will not yet be running when this callback is called. The bridge must298298+ * not enable the display link feeding the next bridge in the chain (if299299+ * there is one) when this callback is called.279300 *280301 * The @pre_enable callback is optional.281302 *···277322 /**278323 * @enable:279324 *280280- * The @enable callback should enable the bridge.325325+ * This callback should enable the bridge. It is called right after326326+ * the preceding element in the display pipe is enabled. If the327327+ * preceding element is a bridge this means it's called after that328328+ * bridge's @enable function. 
If the preceding element is a329329+ * &drm_encoder it's called right after the encoder's330330+ * &drm_encoder_helper_funcs.enable, &drm_encoder_helper_funcs.commit or331331+ * &drm_encoder_helper_funcs.dpms hook.281332 *282333 * The bridge can assume that the display pipe (i.e. clocks and timing283334 * signals) feeding it is running when this callback is called. This284335 * callback must enable the display link feeding the next bridge in the285336 * chain if there is one.286286- *287287- * If the preceding element is a &drm_bridge, then this is called after288288- * that bridge is enabled via one of:289289- *290290- * - &drm_bridge_funcs.enable291291- * - &drm_bridge_funcs.atomic_enable292292- *293293- * If the preceding element of the bridge is a display controller, then294294- * this callback is called after the CRTC is enabled via one of:295295- *296296- * - &drm_crtc_helper_funcs.atomic_enable297297- * - &drm_crtc_helper_funcs.commit298298- *299299- * and the encoder is enabled via one of:300300- *301301- * - &drm_encoder_helper_funcs.atomic_enable302302- * - &drm_encoder_helper_funcs.enable303303- * - drm_encoder_helper_funcs.commit304337 *305338 * The @enable callback is optional.306339 *···302359 /**303360 * @atomic_pre_enable:304361 *362362+ * This callback should enable the bridge. It is called right before363363+ * the preceding element in the display pipe is enabled. If the364364+ * preceding element is a bridge this means it's called before that365365+ * bridge's @atomic_pre_enable or @pre_enable function. If the preceding366366+ * element is a &drm_encoder it's called right before the encoder's367367+ * &drm_encoder_helper_funcs.atomic_enable hook.368368+ *305369 * The display pipe (i.e. 
clocks and timing signals) feeding this bridge306306- * will not yet be running when the @atomic_pre_enable is called.307307- *308308- * This callback should perform all the necessary actions to prepare the309309- * bridge to accept signals from the preceding element.310310- *311311- * If the preceding element is a &drm_bridge, then this is called before312312- * that bridge is pre-enabled (unless marked otherwise by313313- * @pre_enable_prev_first flag) via one of:314314- *315315- * - &drm_bridge_funcs.pre_enable316316- * - &drm_bridge_funcs.atomic_pre_enable317317- *318318- * If the preceding element of the bridge is a display controller, then319319- * this callback is called before the CRTC is enabled via one of:320320- *321321- * - &drm_crtc_helper_funcs.atomic_enable322322- * - &drm_crtc_helper_funcs.commit323323- *324324- * and the encoder is enabled via one of:325325- *326326- * - &drm_encoder_helper_funcs.atomic_enable327327- * - &drm_encoder_helper_funcs.enable328328- * - &drm_encoder_helper_funcs.commit370370+ * will not yet be running when this callback is called. The bridge must371371+ * not enable the display link feeding the next bridge in the chain (if372372+ * there is one) when this callback is called.329373 *330374 * The @atomic_pre_enable callback is optional.331375 */···322392 /**323393 * @atomic_enable:324394 *325325- * The @atomic_enable callback should enable the bridge.395395+ * This callback should enable the bridge. It is called right after396396+ * the preceding element in the display pipe is enabled. If the397397+ * preceding element is a bridge this means it's called after that398398+ * bridge's @atomic_enable or @enable function. If the preceding element399399+ * is a &drm_encoder it's called right after the encoder's400400+ * &drm_encoder_helper_funcs.atomic_enable hook.326401 *327402 * The bridge can assume that the display pipe (i.e. clocks and timing328403 * signals) feeding it is running when this callback is called. 
This329404 * callback must enable the display link feeding the next bridge in the330405 * chain if there is one.331331- *332332- * If the preceding element is a &drm_bridge, then this is called after333333- * that bridge is enabled via one of:334334- *335335- * - &drm_bridge_funcs.enable336336- * - &drm_bridge_funcs.atomic_enable337337- *338338- * If the preceding element of the bridge is a display controller, then339339- * this callback is called after the CRTC is enabled via one of:340340- *341341- * - &drm_crtc_helper_funcs.atomic_enable342342- * - &drm_crtc_helper_funcs.commit343343- *344344- * and the encoder is enabled via one of:345345- *346346- * - &drm_encoder_helper_funcs.atomic_enable347347- * - &drm_encoder_helper_funcs.enable348348- * - drm_encoder_helper_funcs.commit349406 *350407 * The @atomic_enable callback is optional.351408 */···341424 /**342425 * @atomic_disable:343426 *344344- * The @atomic_disable callback should disable the bridge.427427+ * This callback should disable the bridge. It is called right before428428+ * the preceding element in the display pipe is disabled. If the429429+ * preceding element is a bridge this means it's called before that430430+ * bridge's @atomic_disable or @disable vfunc. If the preceding element431431+ * is a &drm_encoder it's called right before the432432+ * &drm_encoder_helper_funcs.atomic_disable hook.345433 *346434 * The bridge can assume that the display pipe (i.e. 
clocks and timing347435 * signals) feeding it is still running when this callback is called.348348- *349349- * If the preceding element is a &drm_bridge, then this is called before350350- * that bridge is disabled via one of:351351- *352352- * - &drm_bridge_funcs.disable353353- * - &drm_bridge_funcs.atomic_disable354354- *355355- * If the preceding element of the bridge is a display controller, then356356- * this callback is called before the encoder is disabled via one of:357357- *358358- * - &drm_encoder_helper_funcs.atomic_disable359359- * - &drm_encoder_helper_funcs.prepare360360- * - &drm_encoder_helper_funcs.disable361361- * - &drm_encoder_helper_funcs.dpms362362- *363363- * and the CRTC is disabled via one of:364364- *365365- * - &drm_crtc_helper_funcs.prepare366366- * - &drm_crtc_helper_funcs.atomic_disable367367- * - &drm_crtc_helper_funcs.disable368368- * - &drm_crtc_helper_funcs.dpms.369436 *370437 * The @atomic_disable callback is optional.371438 */···359458 /**360459 * @atomic_post_disable:361460 *461461+ * This callback should disable the bridge. It is called right after the462462+ * preceding element in the display pipe is disabled. If the preceding463463+ * element is a bridge this means it's called after that bridge's464464+ * @atomic_post_disable or @post_disable function. If the preceding465465+ * element is a &drm_encoder it's called right after the encoder's466466+ * &drm_encoder_helper_funcs.atomic_disable hook.467467+ *362468 * The bridge must assume that the display pipe (i.e. 
clocks and timing363363- * signals) feeding this bridge is no longer running when the364364- * @atomic_post_disable is called.365365- *366366- * This callback should perform all the actions required by the hardware367367- * after it has stopped receiving signals from the preceding element.368368- *369369- * If the preceding element is a &drm_bridge, then this is called after370370- * that bridge is post-disabled (unless marked otherwise by the371371- * @pre_enable_prev_first flag) via one of:372372- *373373- * - &drm_bridge_funcs.post_disable374374- * - &drm_bridge_funcs.atomic_post_disable375375- *376376- * If the preceding element of the bridge is a display controller, then377377- * this callback is called after the encoder is disabled via one of:378378- *379379- * - &drm_encoder_helper_funcs.atomic_disable380380- * - &drm_encoder_helper_funcs.prepare381381- * - &drm_encoder_helper_funcs.disable382382- * - &drm_encoder_helper_funcs.dpms383383- *384384- * and the CRTC is disabled via one of:385385- *386386- * - &drm_crtc_helper_funcs.prepare387387- * - &drm_crtc_helper_funcs.atomic_disable388388- * - &drm_crtc_helper_funcs.disable389389- * - &drm_crtc_helper_funcs.dpms469469+ * signals) feeding it is no longer running when this callback is470470+ * called.390471 *391472 * The @atomic_post_disable callback is optional.392473 */
···11671167 */11681168struct ftrace_graph_ent {11691169 unsigned long func; /* Current function */11701170- unsigned long depth;11701170+ long depth; /* signed to check for less than zero */11711171} __packed;1172117211731173/*
+1-1
include/linux/hrtimer.h
···22/*33 * hrtimers - High-resolution kernel timers44 *55- * Copyright(C) 2005, Thomas Gleixner <tglx@linutronix.de>55+ * Copyright(C) 2005, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>66 * Copyright(C) 2005, Red Hat, Inc., Ingo Molnar77 *88 * data type definitions, declarations, prototypes
+6-2
include/linux/iomap.h
···8888/*8989 * Flags set by the core iomap code during operations:9090 *9191+ * IOMAP_F_FOLIO_BATCH indicates that the folio batch mechanism is active9292+ * for this operation, set by iomap_fill_dirty_folios().9393+ *9194 * IOMAP_F_SIZE_CHANGED indicates to the iomap_end method that the file size9295 * has changed as the result of this write operation.9396 *···9895 * range it covers needs to be remapped by the high level before the operation9996 * can proceed.10097 */9898+#define IOMAP_F_FOLIO_BATCH (1U << 13)10199#define IOMAP_F_SIZE_CHANGED (1U << 14)102100#define IOMAP_F_STALE (1U << 15)103101···356352int iomap_file_unshare(struct inode *inode, loff_t pos, loff_t len,357353 const struct iomap_ops *ops,358354 const struct iomap_write_ops *write_ops);359359-loff_t iomap_fill_dirty_folios(struct iomap_iter *iter, loff_t offset,360360- loff_t length);355355+unsigned int iomap_fill_dirty_folios(struct iomap_iter *iter, loff_t *start,356356+ loff_t end, unsigned int *iomap_flags);361357int iomap_zero_range(struct inode *inode, loff_t pos, loff_t len,362358 bool *did_zero, const struct iomap_ops *ops,363359 const struct iomap_write_ops *write_ops, void *private);
+1-1
include/linux/ktime.h
···33 *44 * ktime_t - nanosecond-resolution time format.55 *66- * Copyright(C) 2005, Thomas Gleixner <tglx@linutronix.de>66+ * Copyright(C) 2005, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>77 * Copyright(C) 2005, Red Hat, Inc., Ingo Molnar88 *99 * data type definitions, declarations, prototypes and macros.
···22/*33 * Copyright (C) 2000-2010 Steven J. Hill <sjhill@realitydiluted.com>44 * David Woodhouse <dwmw2@infradead.org>55- * Thomas Gleixner <tglx@linutronix.de>55+ * Thomas Gleixner <tglx@kernel.org>66 *77 * This file is the header for the NAND Hamming ECC implementation.88 */
···1515 * Author: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>1616 *1717 * Scaled math optimizations by Thomas Gleixner1818- * Copyright (C) 2007, Thomas Gleixner <tglx@linutronix.de>1818+ * Copyright (C) 2007, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>1919 *2020 * Adaptive scheduling granularity, math enhancements by Peter Zijlstra2121 * Copyright (C) 2007 Red Hat, Inc., Peter Zijlstra
+1-1
kernel/sched/pelt.c
···1515 * Author: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>1616 *1717 * Scaled math optimizations by Thomas Gleixner1818- * Copyright (C) 2007, Thomas Gleixner <tglx@linutronix.de>1818+ * Copyright (C) 2007, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>1919 *2020 * Adaptive scheduling granularity, math enhancements by Peter Zijlstra2121 * Copyright (C) 2007 Red Hat, Inc., Peter Zijlstra
+1-1
kernel/time/clockevents.c
···22/*33 * This file contains functions which manage clock event devices.44 *55- * Copyright(C) 2005-2006, Thomas Gleixner <tglx@linutronix.de>55+ * Copyright(C) 2005-2006, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>66 * Copyright(C) 2005-2007, Red Hat, Inc., Ingo Molnar77 * Copyright(C) 2006-2007, Timesys Corp., Thomas Gleixner88 */
+1-1
kernel/time/hrtimer.c
···11// SPDX-License-Identifier: GPL-2.022/*33- * Copyright(C) 2005-2006, Thomas Gleixner <tglx@linutronix.de>33+ * Copyright(C) 2005-2006, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>44 * Copyright(C) 2005-2007, Red Hat, Inc., Ingo Molnar55 * Copyright(C) 2006-2007 Timesys Corp., Thomas Gleixner66 *
+1-1
kernel/time/tick-broadcast.c
···33 * This file contains functions which emulate a local clock-event44 * device via a broadcast event source.55 *66- * Copyright(C) 2005-2006, Thomas Gleixner <tglx@linutronix.de>66+ * Copyright(C) 2005-2006, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>77 * Copyright(C) 2005-2007, Red Hat, Inc., Ingo Molnar88 * Copyright(C) 2006-2007, Timesys Corp., Thomas Gleixner99 */
+1-1
kernel/time/tick-common.c
···33 * This file contains the base functions to manage periodic tick44 * related events.55 *66- * Copyright(C) 2005-2006, Thomas Gleixner <tglx@linutronix.de>66+ * Copyright(C) 2005-2006, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>77 * Copyright(C) 2005-2007, Red Hat, Inc., Ingo Molnar88 * Copyright(C) 2006-2007, Timesys Corp., Thomas Gleixner99 */
+1-1
kernel/time/tick-oneshot.c
···33 * This file contains functions which manage high resolution tick44 * related events.55 *66- * Copyright(C) 2005-2006, Thomas Gleixner <tglx@linutronix.de>66+ * Copyright(C) 2005-2006, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>77 * Copyright(C) 2005-2007, Red Hat, Inc., Ingo Molnar88 * Copyright(C) 2006-2007, Timesys Corp., Thomas Gleixner99 */
+1-1
kernel/time/tick-sched.c
···11// SPDX-License-Identifier: GPL-2.022/*33- * Copyright(C) 2005-2006, Thomas Gleixner <tglx@linutronix.de>33+ * Copyright(C) 2005-2006, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>44 * Copyright(C) 2005-2007, Red Hat, Inc., Ingo Molnar55 * Copyright(C) 2006-2007 Timesys Corp., Thomas Gleixner66 *
···138138 * by commas.139139 */140140/* Set to string format zero to disable by default */141141-char ftrace_dump_on_oops[MAX_TRACER_SIZE] = "0";141141+static char ftrace_dump_on_oops[MAX_TRACER_SIZE] = "0";142142143143/* When set, tracing will stop when a WARN*() is hit */144144static int __disable_trace_on_warning;···30123012 struct ftrace_stack *fstack;30133013 struct stack_entry *entry;30143014 int stackidx;30153015+ int bit;30163016+30173017+ bit = trace_test_and_set_recursion(_THIS_IP_, _RET_IP_, TRACE_EVENT_START);30183018+ if (bit < 0)30193019+ return;3015302030163021 /*30173022 * Add one, for this function and the call to save_stack_trace()···30853080 /* Again, don't let gcc optimize things here */30863081 barrier();30873082 __this_cpu_dec(ftrace_stack_reserve);30833083+ trace_clear_recursion(bit);30883084}3089308530903086static inline void ftrace_trace_stack(struct trace_array *tr,
+3-4
kernel/trace/trace_events.c
···826826 * When soft_disable is set and enable is set, we want to827827 * register the tracepoint for the event, but leave the event828828 * as is. That means, if the event was already enabled, we do829829- * nothing (but set soft_mode). If the event is disabled, we830830- * set SOFT_DISABLED before enabling the event tracepoint, so831831- * it still seems to be disabled.829829+ * nothing. If the event is disabled, we set SOFT_DISABLED830830+ * before enabling the event tracepoint, so it still seems831831+ * to be disabled.832832 */833833 if (!soft_disable)834834 clear_bit(EVENT_FILE_FL_SOFT_DISABLED_BIT, &file->flags);835835 else {836836 if (atomic_inc_return(&file->sm_ref) > 1)837837 break;838838- soft_mode = true;839838 /* Enable use of trace_buffered_event */840839 trace_buffered_event_enable();841840 }
···22/*33 * Generic infrastructure for lifetime debugging of objects.44 *55- * Copyright (C) 2008, Thomas Gleixner <tglx@linutronix.de>55+ * Copyright (C) 2008, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>66 */7788#define pr_fmt(fmt) "ODEBUG: " fmt
+1-1
lib/plist.c
···1010 * 2001-2005 (c) MontaVista Software, Inc.1111 * Daniel Walker <dwalker@mvista.com>1212 *1313- * (C) 2005 Thomas Gleixner <tglx@linutronix.de>1313+ * (C) 2005 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>1414 *1515 * Simplifications of the original code by1616 * Oleg Nesterov <oleg@tv-sign.ru>
+1-1
lib/reed_solomon/decode_rs.c
···55 * Copyright 2002, Phil Karn, KA9Q66 * May be used under the terms of the GNU General Public License (GPL)77 *88- * Adaption to the kernel by Thomas Gleixner (tglx@linutronix.de)88+ * Adaption to the kernel by Thomas Gleixner (tglx@kernel.org)99 *1010 * Generic data width independent code which is included by the wrappers.1111 */
+1-1
lib/reed_solomon/encode_rs.c
···55 * Copyright 2002, Phil Karn, KA9Q66 * May be used under the terms of the GNU General Public License (GPL)77 *88- * Adaption to the kernel by Thomas Gleixner (tglx@linutronix.de)88+ * Adaption to the kernel by Thomas Gleixner (tglx@kernel.org)99 *1010 * Generic data width independent code which is included by the wrappers.1111 */
+1-1
lib/reed_solomon/reed_solomon.c
···22/*33 * Generic Reed Solomon encoder / decoder library44 *55- * Copyright (C) 2004 Thomas Gleixner (tglx@linutronix.de)55+ * Copyright (C) 2004 Thomas Gleixner (tglx@kernel.org)66 *77 * Reed Solomon code lifted from reed solomon library written by Phil Karn88 * Copyright 2002 Phil Karn, KA9Q
+7-4
net/bridge/br_vlan_tunnel.c
···189189 IP_TUNNEL_DECLARE_FLAGS(flags) = { };190190 struct metadata_dst *tunnel_dst;191191 __be64 tunnel_id;192192- int err;193192194193 if (!vlan)195194 return 0;···198199 return 0;199200200201 skb_dst_drop(skb);201201- err = skb_vlan_pop(skb);202202- if (err)203203- return err;202202+ /* For 802.1ad (QinQ), skb_vlan_pop() incorrectly moves the C-VLAN203203+ * from payload to hwaccel after clearing S-VLAN. We only need to204204+ * clear the hwaccel S-VLAN; the C-VLAN must stay in payload for205205+ * correct VXLAN encapsulation. This is also correct for 802.1Q206206+ * where no C-VLAN exists in payload.207207+ */208208+ __vlan_hwaccel_clear_tag(skb);204209205210 if (BR_INPUT_SKB_CB(skb)->backup_nhid) {206211 __set_bit(IP_TUNNEL_KEY_BIT, flags);
···38963896int sock_recv_errqueue(struct sock *sk, struct msghdr *msg, int len,38973897 int level, int type)38983898{38993899- struct sock_exterr_skb *serr;38993899+ struct sock_extended_err ee;39003900 struct sk_buff *skb;39013901 int copied, err;39023902···3916391639173917 sock_recv_timestamp(msg, sk, skb);3918391839193919- serr = SKB_EXT_ERR(skb);39203920- put_cmsg(msg, level, type, sizeof(serr->ee), &serr->ee);39193919+ /* We must use a bounce buffer for CONFIG_HARDENED_USERCOPY=y */39203920+ ee = SKB_EXT_ERR(skb)->ee;39213921+ put_cmsg(msg, level, type, sizeof(ee), &ee);3921392239223923 msg->msg_flags |= MSG_ERRQUEUE;39233924 err = copied;
+4-3
net/ipv4/arp.c
···564564565565 skb_reserve(skb, hlen);566566 skb_reset_network_header(skb);567567- arp = skb_put(skb, arp_hdr_len(dev));567567+ skb_put(skb, arp_hdr_len(dev));568568 skb->dev = dev;569569 skb->protocol = htons(ETH_P_ARP);570570 if (!src_hw)···572572 if (!dest_hw)573573 dest_hw = dev->broadcast;574574575575- /*576576- * Fill the device header for the ARP frame575575+ /* Fill the device header for the ARP frame.576576+ * Note: skb->head can be changed.577577 */578578 if (dev_hard_header(skb, dev, ptype, dest_hw, src_hw, skb->len) < 0)579579 goto out;580580581581+ arp = arp_hdr(skb);581582 /*582583 * Fill out the arp protocol part.583584 *
···9090 /* next (or first) interface */9191 iter->sdata = list_prepare_entry(iter->sdata, &local->interfaces, list);9292 list_for_each_entry_continue(iter->sdata, &local->interfaces, list) {9393+ if (!ieee80211_sdata_running(iter->sdata))9494+ continue;9595+9396 /* AP_VLAN has a chanctx pointer but follows AP */9497 if (iter->sdata->vif.type == NL80211_IFTYPE_AP_VLAN)9598 continue;
+4-3
net/mac80211/sta_info.c
···15331533 }15341534 }1535153515361536+ sinfo = kzalloc(sizeof(*sinfo), GFP_KERNEL);15371537+ if (sinfo)15381538+ sta_set_sinfo(sta, sinfo, true);15391539+15361540 if (sta->uploaded) {15371541 ret = drv_sta_state(local, sdata, sta, IEEE80211_STA_NONE,15381542 IEEE80211_STA_NOTEXIST);···1545154115461542 sta_dbg(sdata, "Removed STA %pM\n", sta->sta.addr);1547154315481548- sinfo = kzalloc(sizeof(*sinfo), GFP_KERNEL);15491549- if (sinfo)15501550- sta_set_sinfo(sta, sinfo, true);15511544 cfg80211_del_sta_sinfo(sdata->dev, sta->sta.addr, sinfo, GFP_KERNEL);15521545 kfree(sinfo);15531546
···8989 if (pf == NFPROTO_UNSPEC) {9090 for (i = NFPROTO_UNSPEC; i < NFPROTO_NUMPROTO; i++) {9191 if (rcu_access_pointer(loggers[i][logger->type])) {9292- ret = -EEXIST;9292+ ret = -EBUSY;9393 goto unlock;9494 }9595 }···9797 rcu_assign_pointer(loggers[i][logger->type], logger);9898 } else {9999 if (rcu_access_pointer(loggers[pf][logger->type])) {100100- ret = -EEXIST;100100+ ret = -EBUSY;101101 goto unlock;102102 }103103 rcu_assign_pointer(loggers[pf][logger->type], logger);
···17641764int xt_register_template(const struct xt_table *table,17651765 int (*table_init)(struct net *net))17661766{17671767- int ret = -EEXIST, af = table->af;17671767+ int ret = -EBUSY, af = table->af;17681768 struct xt_template *t;1769176917701770 mutex_lock(&xt[af].mutex);
+2
net/sched/act_api.c
···940940 int ret;941941942942 idr_for_each_entry_ul(idr, p, tmp, id) {943943+ if (IS_ERR(p))944944+ continue;943945 if (tc_act_in_hw(p) && !mutex_taken) {944946 rtnl_lock();945947 mutex_taken = true;
+13-13
net/sched/act_mirred.c
···266266 goto err_cant_do;267267 }268268269269- /* we could easily avoid the clone only if called by ingress and clsact;270270- * since we can't easily detect the clsact caller, skip clone only for271271- * ingress - that covers the TC S/W datapath.272272- */273273- at_ingress = skb_at_tc_ingress(skb);274274- dont_clone = skb_at_tc_ingress(skb) && is_redirect &&275275- tcf_mirred_can_reinsert(retval);276276- if (!dont_clone) {277277- skb_to_send = skb_clone(skb, GFP_ATOMIC);278278- if (!skb_to_send)279279- goto err_cant_do;280280- }281281-282269 want_ingress = tcf_mirred_act_wants_ingress(m_eaction);283270271271+ at_ingress = skb_at_tc_ingress(skb);284272 if (dev == skb->dev && want_ingress == at_ingress) {285273 pr_notice_once("tc mirred: Loop (%s:%s --> %s:%s)\n",286274 netdev_name(skb->dev),···276288 netdev_name(dev),277289 want_ingress ? "ingress" : "egress");278290 goto err_cant_do;291291+ }292292+293293+ /* we could easily avoid the clone only if called by ingress and clsact;294294+ * since we can't easily detect the clsact caller, skip clone only for295295+ * ingress - that covers the TC S/W datapath.296296+ */297297+ dont_clone = skb_at_tc_ingress(skb) && is_redirect &&298298+ tcf_mirred_can_reinsert(retval);299299+ if (!dont_clone) {300300+ skb_to_send = skb_clone(skb, GFP_ATOMIC);301301+ if (!skb_to_send)302302+ goto err_cant_do;279303 }280304281305 /* All mirred/redirected skbs should clear previous ct info */
+1-1
net/sched/sch_qfq.c
···1481148114821482 for (i = 0; i < q->clhash.hashsize; i++) {14831483 hlist_for_each_entry(cl, &q->clhash.hash[i], common.hnode) {14841484- if (cl->qdisc->q.qlen > 0)14841484+ if (cl_is_active(cl))14851485 qfq_deactivate_class(q, cl);1486148614871487 qdisc_reset(cl->qdisc);
+3-5
net/unix/af_unix.c
···29042904 unsigned int last_len;29052905 struct unix_sock *u;29062906 int copied = 0;29072907- bool do_cmsg;29082907 int err = 0;29092908 long timeo;29102909 int target;···2929293029302931 u = unix_sk(sk);2931293229322932- do_cmsg = READ_ONCE(u->recvmsg_inq);29332933- if (do_cmsg)29342934- msg->msg_get_inq = 1;29352933redo:29362934 /* Lock the socket to prevent queue disordering29372935 * while sleeps in memcpy_tomsg···3086309030873091 mutex_unlock(&u->iolock);30883092 if (msg) {30933093+ bool do_cmsg = READ_ONCE(u->recvmsg_inq);30943094+30893095 scm_recv_unix(sock, msg, &scm, flags);3090309630913091- if (msg->msg_get_inq && (copied ?: err) >= 0) {30973097+ if ((do_cmsg | msg->msg_get_inq) && (copied ?: err) >= 0) {30923098 msg->msg_inq = READ_ONCE(u->inq_len);30933099 if (do_cmsg)30943100 put_cmsg(msg, SOL_SOCKET, SCM_INQ,
···14141515#[cfg(CONFIG_PRINTK)]1616use crate::c_str;1717-use crate::str::CStrExt as _;18171918pub mod property;2019···6667///6768/// # Implementing Bus Devices6869///6969-/// This section provides a guideline to implement bus specific devices, such as [`pci::Device`] or7070-/// [`platform::Device`].7070+/// This section provides a guideline to implement bus specific devices, such as:7171+#[cfg_attr(CONFIG_PCI, doc = "* [`pci::Device`](kernel::pci::Device)")]7272+/// * [`platform::Device`]7173///7274/// A bus specific device should be defined as follows.7375///···160160///161161/// [`AlwaysRefCounted`]: kernel::types::AlwaysRefCounted162162/// [`impl_device_context_deref`]: kernel::impl_device_context_deref163163-/// [`pci::Device`]: kernel::pci::Device164163/// [`platform::Device`]: kernel::platform::Device165164#[repr(transparent)]166165pub struct Device<Ctx: DeviceContext = Normal>(Opaque<bindings::device>, PhantomData<Ctx>);
+1-1
rust/kernel/device_id.rs
···1515/// # Safety1616///1717/// Implementers must ensure that `Self` is layout-compatible with [`RawDeviceId::RawType`];1818-/// i.e. it's safe to transmute to `RawDeviceId`.1818+/// i.e. it's safe to transmute to `RawType`.1919///2020/// This requirement is needed so `IdArray::new` can convert `Self` to `RawType` when building2121/// the ID table.
+3-4
rust/kernel/dma.rs
···2727/// Trait to be implemented by DMA capable bus devices.2828///2929/// The [`dma::Device`](Device) trait should be implemented by bus specific device representations,3030-/// where the underlying bus is DMA capable, such as [`pci::Device`](::kernel::pci::Device) or3131-/// [`platform::Device`](::kernel::platform::Device).3030+/// where the underlying bus is DMA capable, such as:3131+#[cfg_attr(CONFIG_PCI, doc = "* [`pci::Device`](kernel::pci::Device)")]3232+/// * [`platform::Device`](::kernel::platform::Device)3233pub trait Device: AsRef<device::Device<Core>> {3334 /// Set up the device's DMA streaming addressing capabilities.3435 ///···533532 ///534533 /// # Safety535534 ///536536- /// * Callers must ensure that the device does not read/write to/from memory while the returned537537- /// slice is live.538535 /// * Callers must ensure that this call does not race with a read or write to the same region539536 /// that overlaps with this write.540537 ///
+8-4
rust/kernel/driver.rs
···3333//! }3434//! ```3535//!3636-//! For specific examples see [`auxiliary::Driver`], [`pci::Driver`] and [`platform::Driver`].3636+//! For specific examples see:3737+//!3838+//! * [`platform::Driver`](kernel::platform::Driver)3939+#![cfg_attr(4040+ CONFIG_AUXILIARY_BUS,4141+ doc = "* [`auxiliary::Driver`](kernel::auxiliary::Driver)"4242+)]4343+#![cfg_attr(CONFIG_PCI, doc = "* [`pci::Driver`](kernel::pci::Driver)")]3744//!3845//! The `probe()` callback should return a `impl PinInit<Self, Error>`, i.e. the driver's private3946//! data. The bus abstraction should store the pointer in the corresponding bus device. The generic···8679//!8780//! For this purpose the generic infrastructure in [`device_id`] should be used.8881//!8989-//! [`auxiliary::Driver`]: kernel::auxiliary::Driver9082//! [`Core`]: device::Core9183//! [`Device`]: device::Device9284//! [`Device<Core>`]: device::Device<device::Core>···9387//! [`DeviceContext`]: device::DeviceContext9488//! [`device_id`]: kernel::device_id9589//! [`module_driver`]: kernel::module_driver9696-//! [`pci::Driver`]: kernel::pci::Driver9797-//! [`platform::Driver`]: kernel::platform::Driver98909991use crate::error::{Error, Result};10092use crate::{acpi, device, of, str::CStr, try_pin_init, types::Opaque, ThisModule};
+2-2
rust/kernel/pci/io.rs
···2020///2121/// # Invariants2222///2323-/// `Bar` always holds an `IoRaw` inststance that holds a valid pointer to the start of the I/O2323+/// `Bar` always holds an `IoRaw` instance that holds a valid pointer to the start of the I/O2424/// memory mapped PCI BAR and its size.2525pub struct Bar<const SIZE: usize = 0> {2626 pdev: ARef<Device>,···5454 let ioptr: usize = unsafe { bindings::pci_iomap(pdev.as_raw(), num, 0) } as usize;5555 if ioptr == 0 {5656 // SAFETY:5757- // `pdev` valid by the invariants of `Device`.5757+ // `pdev` is valid by the invariants of `Device`.5858 // `num` is checked for validity by a previous call to `Device::resource_len`.5959 unsafe { bindings::pci_release_region(pdev.as_raw(), num) };6060 return Err(ENOMEM);
···111111 sub = acpi_get_subsystem_id(ACPI_HANDLE(physdev));112112 if (IS_ERR(sub)) {113113 /* No subsys id in older tas2563 projects. */114114- if (!strncmp(hid, "INT8866", sizeof("INT8866")))114114+ if (!strncmp(hid, "INT8866", sizeof("INT8866"))) {115115+ p->speaker_id = -1;115116 goto end_2563;117117+ }116118 dev_err(p->dev, "Failed to get SUBSYS ID.\n");117119 ret = PTR_ERR(sub);118120 goto err;
+2-15
sound/soc/codecs/pm4125.c
···15051505 struct device_link *devlink;15061506 int ret;1507150715081508- /* Initialize device pointers to NULL for safe cleanup */15091509- pm4125->rxdev = NULL;15101510- pm4125->txdev = NULL;15111511-15121508 /* Give the soundwire subdevices some more time to settle */15131509 usleep_range(15000, 15010);15141510···1533153715341538 pm4125->sdw_priv[AIF1_CAP] = dev_get_drvdata(pm4125->txdev);15351539 pm4125->sdw_priv[AIF1_CAP]->pm4125 = pm4125;15361536-15371540 pm4125->tx_sdw_dev = dev_to_sdw_dev(pm4125->txdev);15381538- if (!pm4125->tx_sdw_dev) {15391539- dev_err(dev, "could not get txslave with matching of dev\n");15401540- ret = -EINVAL;15411541- goto error_put_tx;15421542- }1543154115441542 /*15451543 * As TX is the main CSR reg interface, which should not be suspended first.···16141624 device_link_remove(dev, pm4125->rxdev);16151625 device_link_remove(pm4125->rxdev, pm4125->txdev);1616162616171617- /* Release device references acquired in bind */16181618- if (pm4125->txdev)16191619- put_device(pm4125->txdev);16201620- if (pm4125->rxdev)16211621- put_device(pm4125->rxdev);16271627+ put_device(pm4125->txdev);16281628+ put_device(pm4125->rxdev);1622162916231630 component_unbind_all(dev, pm4125);16241631}
+141-8
sound/soc/codecs/tlv320adcx140.c
···22222323#include "tlv320adcx140.h"24242525+static const char *const adcx140_supply_names[] = {2626+ "avdd",2727+ "iovdd",2828+};2929+3030+#define ADCX140_NUM_SUPPLIES ARRAY_SIZE(adcx140_supply_names)3131+2532struct adcx140_priv {2626- struct snd_soc_component *component;2733 struct regulator *supply_areg;3434+ struct regulator_bulk_data supplies[ADCX140_NUM_SUPPLIES];2835 struct gpio_desc *gpio_reset;2936 struct regmap *regmap;3037 struct device *dev;···129122 { ADCX140_DEV_STS1, 0x80 },130123};131124125125+static const struct regmap_range adcx140_wr_ranges[] = {126126+ regmap_reg_range(ADCX140_PAGE_SELECT, ADCX140_SLEEP_CFG),127127+ regmap_reg_range(ADCX140_SHDN_CFG, ADCX140_SHDN_CFG),128128+ regmap_reg_range(ADCX140_ASI_CFG0, ADCX140_ASI_CFG2),129129+ regmap_reg_range(ADCX140_ASI_CH1, ADCX140_MST_CFG1),130130+ regmap_reg_range(ADCX140_CLK_SRC, ADCX140_CLK_SRC),131131+ regmap_reg_range(ADCX140_PDMCLK_CFG, ADCX140_GPO_CFG3),132132+ regmap_reg_range(ADCX140_GPO_VAL, ADCX140_GPO_VAL),133133+ regmap_reg_range(ADCX140_GPI_CFG0, ADCX140_GPI_CFG1),134134+ regmap_reg_range(ADCX140_GPI_MON, ADCX140_GPI_MON),135135+ regmap_reg_range(ADCX140_INT_CFG, ADCX140_INT_MASK0),136136+ regmap_reg_range(ADCX140_BIAS_CFG, ADCX140_CH4_CFG4),137137+ regmap_reg_range(ADCX140_CH5_CFG2, ADCX140_CH5_CFG4),138138+ regmap_reg_range(ADCX140_CH6_CFG2, ADCX140_CH6_CFG4),139139+ regmap_reg_range(ADCX140_CH7_CFG2, ADCX140_CH7_CFG4),140140+ regmap_reg_range(ADCX140_CH8_CFG2, ADCX140_CH8_CFG4),141141+ regmap_reg_range(ADCX140_DSP_CFG0, ADCX140_DRE_CFG0),142142+ regmap_reg_range(ADCX140_AGC_CFG0, ADCX140_AGC_CFG0),143143+ regmap_reg_range(ADCX140_IN_CH_EN, ADCX140_PWR_CFG),144144+ regmap_reg_range(ADCX140_PHASE_CALIB, ADCX140_PHASE_CALIB),145145+ regmap_reg_range(0x7e, 0x7e),146146+};147147+148148+static const struct regmap_access_table adcx140_wr_table = {149149+ .yes_ranges = adcx140_wr_ranges,150150+ .n_yes_ranges = ARRAY_SIZE(adcx140_wr_ranges),151151+};152152+132153static const struct 
regmap_range_cfg adcx140_ranges[] = {133154 {134155 .range_min = 0,···192157 .num_ranges = ARRAY_SIZE(adcx140_ranges),193158 .max_register = 12 * 128,194159 .volatile_reg = adcx140_volatile,160160+ .wr_table = &adcx140_wr_table,195161};196162197163/* Digital Volume control. From -100 to 27 dB in 0.5 dB steps */···221185static const struct snd_kcontrol_new decimation_filter_controls[] = {222186 SOC_DAPM_ENUM("Decimation Filter", decimation_filter_enum),223187};188188+189189+static const char * const channel_summation_text[] = {190190+ "Disabled", "2 Channel", "4 Channel"191191+};192192+193193+static SOC_ENUM_SINGLE_DECL(channel_summation_enum, ADCX140_DSP_CFG0, 2,194194+ channel_summation_text);224195225196static const char * const pdmclk_text[] = {226197 "2.8224 MHz", "1.4112 MHz", "705.6 kHz", "5.6448 MHz"···381338 SOC_DAPM_SINGLE("Switch", ADCX140_CH4_CFG0, 0, 1, 0);382339383340static const struct snd_kcontrol_new adcx140_dapm_dre_en_switch =384384- SOC_DAPM_SINGLE("Switch", ADCX140_DSP_CFG1, 3, 1, 0);341341+ SOC_DAPM_SINGLE("Switch", ADCX140_DSP_CFG1, 3, 1, 1);385342386343/* Output Mixer */387344static const struct snd_kcontrol_new adcx140_output_mixer_controls[] = {···716673 SOC_SINGLE_TLV("Digital CH8 Out Volume", ADCX140_CH8_CFG2,717674 0, 0xff, 0, dig_vol_tlv),718675 ADCX140_PHASE_CALIB_SWITCH("Phase Calibration Switch"),676676+677677+ SOC_SINGLE("Biquads Per Channel", ADCX140_DSP_CFG1, 5, 3, 0),678678+679679+ SOC_ENUM("Channel Summation", channel_summation_enum),719680};720681721682static int adcx140_reset(struct adcx140_priv *adcx140)···746699{747700 int pwr_ctrl = 0;748701 int ret = 0;749749- struct snd_soc_component *component = adcx140->component;750702751703 if (power_state)752704 pwr_ctrl = ADCX140_PWR_CFG_ADC_PDZ | ADCX140_PWR_CFG_PLL_PDZ;···757711 ret = regmap_write(adcx140->regmap, ADCX140_PHASE_CALIB,758712 adcx140->phase_calib_on ? 
0x00 : 0x40);759713 if (ret)760760- dev_err(component->dev, "%s: register write error %d\n",714714+ dev_err(adcx140->dev, "%s: register write error %d\n",761715 __func__, ret);762716 }763717···773727 struct adcx140_priv *adcx140 = snd_soc_component_get_drvdata(component);774728 u8 data = 0;775729776776- switch (params_width(params)) {730730+ switch (params_physical_width(params)) {777731 case 16:778732 data = ADCX140_16_BIT_WORD;779733 break;···788742 break;789743 default:790744 dev_err(component->dev, "%s: Unsupported width %d\n",791791- __func__, params_width(params));745745+ __func__, params_physical_width(params));792746 return -EINVAL;793747 }794748···11211075 return ret;11221076}1123107710781078+static int adcx140_pwr_off(struct adcx140_priv *adcx140)10791079+{10801080+ int ret;10811081+10821082+ regcache_cache_only(adcx140->regmap, true);10831083+ regcache_mark_dirty(adcx140->regmap);10841084+10851085+ /* Assert the reset GPIO */10861086+ gpiod_set_value_cansleep(adcx140->gpio_reset, 0);10871087+10881088+ /*10891089+ * Datasheet - TLV320ADC3140 Rev. B, TLV320ADC5140 Rev. A,10901090+ * TLV320ADC6140 Rev. 
A 8.4.1:10911091+ * wait for hw shutdown (25ms) + >= 1ms10921092+ */10931093+ usleep_range(30000, 100000);10941094+10951095+ /* Power off the regulators, `avdd` and `iovdd` */10961096+ ret = regulator_bulk_disable(ARRAY_SIZE(adcx140->supplies),10971097+ adcx140->supplies);10981098+ if (ret) {10991099+ dev_err(adcx140->dev, "Failed to disable supplies: %d\n", ret);11001100+ return ret;11011101+ }11021102+11031103+ return 0;11041104+}11051105+11061106+static int adcx140_pwr_on(struct adcx140_priv *adcx140)11071107+{11081108+ int ret;11091109+11101110+ /* Power on the regulators, `avdd` and `iovdd` */11111111+ ret = regulator_bulk_enable(ARRAY_SIZE(adcx140->supplies),11121112+ adcx140->supplies);11131113+ if (ret) {11141114+ dev_err(adcx140->dev, "Failed to enable supplies: %d\n", ret);11151115+ return ret;11161116+ }11171117+11181118+ /* De-assert the reset GPIO */11191119+ gpiod_set_value_cansleep(adcx140->gpio_reset, 1);11201120+11211121+ /*11221122+ * Datasheet - TLV320ADC3140 Rev. B, TLV320ADC5140 Rev. A,11231123+ * TLV320ADC6140 Rev. 
A 8.4.2:11241124+ * wait >= 10 ms after entering sleep mode.11251125+ */11261126+ usleep_range(10000, 100000);11271127+11281128+ regcache_cache_only(adcx140->regmap, false);11291129+11301130+ /* Flush the regcache */11311131+ ret = regcache_sync(adcx140->regmap);11321132+ if (ret) {11331133+ dev_err(adcx140->dev, "Failed to restore register map: %d\n",11341134+ ret);11351135+ return ret;11361136+ }11371137+11381138+ return 0;11391139+}11401140+11241141static int adcx140_set_bias_level(struct snd_soc_component *component,11251142 enum snd_soc_bias_level level)11261143{11271144 struct adcx140_priv *adcx140 = snd_soc_component_get_drvdata(component);11451145+ enum snd_soc_bias_level prev_level11461146+ = snd_soc_component_get_bias_level(component);1128114711291148 switch (level) {11301149 case SND_SOC_BIAS_ON:11311150 case SND_SOC_BIAS_PREPARE:11511151+ if (prev_level == SND_SOC_BIAS_STANDBY)11521152+ adcx140_pwr_ctrl(adcx140, true);11531153+ break;11321154 case SND_SOC_BIAS_STANDBY:11331133- adcx140_pwr_ctrl(adcx140, true);11551155+ if (prev_level == SND_SOC_BIAS_PREPARE)11561156+ adcx140_pwr_ctrl(adcx140, false);11571157+ if (prev_level == SND_SOC_BIAS_OFF)11581158+ return adcx140_pwr_on(adcx140);11341159 break;11351160 case SND_SOC_BIAS_OFF:11361136- adcx140_pwr_ctrl(adcx140, false);11611161+ if (prev_level == SND_SOC_BIAS_STANDBY)11621162+ return adcx140_pwr_off(adcx140);11371163 break;11381164 }11391165···12711153 adcx140->phase_calib_on = false;12721154 adcx140->dev = &i2c->dev;1273115511561156+ for (int i = 0; i < ADCX140_NUM_SUPPLIES; i++)11571157+ adcx140->supplies[i].supply = adcx140_supply_names[i];11581158+11591159+ ret = devm_regulator_bulk_get(&i2c->dev, ADCX140_NUM_SUPPLIES,11601160+ adcx140->supplies);11611161+ if (ret) {11621162+ dev_err_probe(&i2c->dev, ret, "Failed to request supplies\n");11631163+ return ret;11641164+ }11651165+12741166 adcx140->gpio_reset = devm_gpiod_get_optional(adcx140->dev,12751167 "reset", GPIOD_OUT_LOW);12761168 if 
(IS_ERR(adcx140->gpio_reset))11691169+ return dev_err_probe(&i2c->dev, PTR_ERR(adcx140->gpio_reset),11701170+ "Failed to get Reset GPIO\n");11711171+ if (!adcx140->gpio_reset)12771172 dev_info(&i2c->dev, "Reset GPIO not defined\n");1278117312791174 adcx140->supply_areg = devm_regulator_get_optional(adcx140->dev,···13151184 ret);13161185 return ret;13171186 }11871187+11881188+ regcache_cache_only(adcx140->regmap, true);1318118913191190 i2c_set_clientdata(i2c, adcx140);13201191
-5
sound/soc/codecs/wcd937x.c
···27632763 wcd937x->sdw_priv[AIF1_CAP] = dev_get_drvdata(wcd937x->txdev);27642764 wcd937x->sdw_priv[AIF1_CAP]->wcd937x = wcd937x;27652765 wcd937x->tx_sdw_dev = dev_to_sdw_dev(wcd937x->txdev);27662766- if (!wcd937x->tx_sdw_dev) {27672767- dev_err(dev, "could not get txslave with matching of dev\n");27682768- ret = -EINVAL;27692769- goto err_put_txdev;27702770- }2771276627722767 /*27732768 * As TX is the main CSR reg interface, which should not be suspended first.
···44 *55 * Builtin list command: list all event types66 *77- * Copyright (C) 2009, Thomas Gleixner <tglx@linutronix.de>77+ * Copyright (C) 2009, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>88 * Copyright (C) 2008-2009, Red Hat Inc, Ingo Molnar <mingo@redhat.com>99 * Copyright (C) 2011, Red Hat Inc, Arnaldo Carvalho de Melo <acme@redhat.com>1010 */
···5252 ip netns del nssv5353}54545555+is_carrier_up()5656+{5757+ local netns="$1"5858+ local nsim_dev="$2"5959+6060+ test "$(ip netns exec "$netns" \6161+ cat /sys/class/net/"$nsim_dev"/carrier 2>/dev/null)" -eq 16262+}6363+6464+assert_carrier_up()6565+{6666+ local netns="$1"6767+ local nsim_dev="$2"6868+6969+ if ! is_carrier_up "$netns" "$nsim_dev"; then7070+ echo "$nsim_dev's carrier should be UP, but it isn't"7171+ cleanup_ns7272+ exit 17373+ fi7474+}7575+7676+assert_carrier_down()7777+{7878+ local netns="$1"7979+ local nsim_dev="$2"8080+8181+ if is_carrier_up "$netns" "$nsim_dev"; then8282+ echo "$nsim_dev's carrier should be DOWN, but it isn't"8383+ cleanup_ns8484+ exit 18585+ fi8686+}8787+5588###5689### Code start5790###···145112 cleanup_ns146113 exit 1147114fi115115+116116+# netdevsim carrier state consistency checking117117+assert_carrier_up nssv "$NSIM_DEV_1_NAME"118118+assert_carrier_up nscl "$NSIM_DEV_2_NAME"119119+120120+echo "$NSIM_DEV_1_FD:$NSIM_DEV_1_IFIDX" > "$NSIM_DEV_SYS_UNLINK"121121+122122+assert_carrier_down nssv "$NSIM_DEV_1_NAME"123123+assert_carrier_down nscl "$NSIM_DEV_2_NAME"124124+125125+ip netns exec nssv ip link set dev "$NSIM_DEV_1_NAME" down126126+ip netns exec nssv ip link set dev "$NSIM_DEV_1_NAME" up127127+128128+assert_carrier_down nssv "$NSIM_DEV_1_NAME"129129+assert_carrier_down nscl "$NSIM_DEV_2_NAME"130130+131131+echo "$NSIM_DEV_1_FD:$NSIM_DEV_1_IFIDX $NSIM_DEV_2_FD:$NSIM_DEV_2_IFIDX" > $NSIM_DEV_SYS_LINK132132+133133+assert_carrier_up nssv "$NSIM_DEV_1_NAME"134134+assert_carrier_up nscl "$NSIM_DEV_2_NAME"135135+136136+ip netns exec nssv ip link set dev "$NSIM_DEV_1_NAME" down137137+ip netns exec nssv ip link set dev "$NSIM_DEV_1_NAME" up138138+139139+assert_carrier_up nssv "$NSIM_DEV_1_NAME"140140+assert_carrier_up nscl "$NSIM_DEV_2_NAME"148141149142# send/recv packets150143
···8989 # The id must be four bytes, test that 3 bytes fails a write9090 if echo -n abc > ./trace_marker_raw ; then9191 echo "Too small of write expected to fail but did not"9292+ echo ${ORIG} > buffer_size_kb9293 exit_fail9394 fi9495···10099101100 if write_buffer 0xdeadbeef $size ; then102101 echo "Too big of write expected to fail but did not"102102+ echo ${ORIG} > buffer_size_kb103103 exit_fail104104 fi105105}106106107107+ORIG=`cat buffer_size_kb`108108+109109+# test_multiple_writes test needs at least 12KB buffer110110+NEW_SIZE=12111111+112112+if [ ${ORIG} -lt ${NEW_SIZE} ]; then113113+ echo ${NEW_SIZE} > buffer_size_kb114114+fi115115+107116test_buffer108108-test_multiple_writes117117+if ! test_multiple_writes; then118118+ echo ${ORIG} > buffer_size_kb119119+ exit_fail120120+fi121121+122122+echo ${ORIG} > buffer_size_kb
···55# Copyright (c) 2017 Benjamin Tissoires <benjamin.tissoires@gmail.com>66# Copyright (c) 2017 Red Hat, Inc.7788+from packaging.version import Version89import platform910import pytest1011import re···1312import subprocess1413from .base import HIDTestUdevRule1514from pathlib import Path1515+1616+1717+@pytest.fixture(autouse=True)1818+def hidtools_version_check():1919+ HIDTOOLS_VERSION = "0.12"2020+ try:2121+ import hidtools2222+2323+ version = hidtools.__version__ # type: ignore2424+ if Version(version) < Version(HIDTOOLS_VERSION):2525+ pytest.skip(reason=f"have hidtools {version}, require >={HIDTOOLS_VERSION}")2626+ except Exception:2727+ pytest.skip(reason=f"hidtools >={HIDTOOLS_VERSION} required")162817291830# See the comment in HIDTestUdevRule, this doesn't set up but it will clean
···2929 net6_port_net6_port net_port_mac_proto_net"30303131# Reported bugs, also described by TYPE_ variables below3232-BUGS="flush_remove_add reload net_port_proto_match avx2_mismatch doublecreate"3232+BUGS="flush_remove_add reload net_port_proto_match avx2_mismatch doublecreate insert_overlap"33333434# List of possible paths to pktgen script from kernel tree for performance tests3535PKTGEN_SCRIPT_PATHS="···410410411411TYPE_doublecreate="412412display cannot create same element twice413413+type_spec ipv4_addr . ipv4_addr414414+chain_spec ip saddr . ip daddr415415+dst addr4416416+proto icmp417417+418418+race_repeat 0419419+420420+perf_duration 0421421+"422422+423423+TYPE_insert_overlap="424424+display reject overlapping range on add413425type_spec ipv4_addr . ipv4_addr414426chain_spec ip saddr . ip daddr415427dst addr4···19621950 err "Could not flush and re-create element in one transaction"19631951 return 119641952 fi19531953+19541954+ return 019551955+}19561956+19571957+add_fail()19581958+{19591959+ if nft add element inet filter test "$1" 2>/dev/null ; then19601960+ err "Returned success for add ${1} given set:"19611961+ err "$(nft -a list set inet filter test )"19621962+ return 119631963+ fi19641964+19651965+ return 019661966+}19671967+19681968+test_bug_insert_overlap()19691969+{19701970+ local elements="1.2.3.4 . 1.2.4.1"19711971+19721972+ setup veth send_"${proto}" set || return ${ksft_skip}19731973+19741974+ add "{ $elements }" || return 119751975+19761976+ elements="1.2.3.0-1.2.3.4 . 1.2.4.1"19771977+ add_fail "{ $elements }" || return 119781978+19791979+ elements="1.2.3.0-1.2.3.4 . 1.2.4.2"19801980+ add "{ $elements }" || return 119811981+19821982+ elements="1.2.3.4 . 1.2.4.1-1.2.4.2"19831983+ add_fail "{ $elements }" || return 11965198419661985 return 019671986}