···
 What:		/sys/kernel/time/aux_clocks/<ID>/enable
 Date:		May 2025
-Contact:	Thomas Gleixner <tglx@linutronix.de>
+Contact:	Thomas Gleixner <tglx@kernel.org>
 Description:
	Controls the enablement of auxiliary clock timekeepers.
+2-2
Documentation/ABI/testing/sysfs-devices-soc
···
 contact: Lee Jones <lee@kernel.org>
 Description:
	Read-only attribute common to all SoCs. Contains the SoC machine
-	name (e.g. Ux500).
+	name (e.g. DB8500).

 What:		/sys/devices/socX/family
 Date:		January 2012
 contact: Lee Jones <lee@kernel.org>
 Description:
	Read-only attribute common to all SoCs. Contains SoC family name
-	(e.g. DB8500).
+	(e.g. ux500).

	On many of ARM based silicon with SMCCC v1.2+ compliant firmware
	this will contain the JEDEC JEP106 manufacturer’s identification
+8
Documentation/admin-guide/sysctl/net.rst
···
 Maximum number of packets, queued on the INPUT side, when the interface
 receives packets faster than kernel can process them.

+qdisc_max_burst
+---------------
+
+Maximum number of packets that can be temporarily stored before
+reaching qdisc.
+
+Default: 1000
+
 netdev_rss_key
 --------------
+1-1
Documentation/arch/x86/topology.rst
···
 Needless to say, code should use the generic functions - this file is *only*
 here to *document* the inner workings of x86 topology.

-Started by Thomas Gleixner <tglx@linutronix.de> and Borislav Petkov <bp@alien8.de>.
+Started by Thomas Gleixner <tglx@kernel.org> and Borislav Petkov <bp@alien8.de>.

 The main aim of the topology facilities is to present adequate interfaces to
 code which needs to know/query/use the structure of the running system wrt
+1-1
Documentation/core-api/cpu_hotplug.rst
···
	Srivatsa Vaddagiri <vatsa@in.ibm.com>,
	Ashok Raj <ashok.raj@intel.com>,
	Joel Schopp <jschopp@austin.ibm.com>,
-	Thomas Gleixner <tglx@linutronix.de>
+	Thomas Gleixner <tglx@kernel.org>

 Introduction
 ============
+1-1
Documentation/core-api/genericirq.rst
···

 The following people have contributed to this document:

-1. Thomas Gleixner tglx@linutronix.de
+1. Thomas Gleixner tglx@kernel.org

 2. Ingo Molnar mingo@elte.hu
+1-1
Documentation/core-api/librs.rst
···

 The following people have contributed to this document:

-Thomas Gleixner\ tglx@linutronix.de
+Thomas Gleixner\ tglx@kernel.org
···
 maintainers:
   - Daniel Lezcano <daniel.lezcano@linaro.org>
-  - Thomas Gleixner <tglx@linutronix.de>
+  - Thomas Gleixner <tglx@kernel.org>
   - Rob Herring <robh@kernel.org>

 properties:
···
     enum: [1, 2]
     default: 2
     description:
-      Number of lanes available per direction. Note that it is assume same
-      number of lanes is used both directions at once.
+      Number of lanes available per direction. Note that it is assumed that
+      the same number of lanes are used in both directions at once.

   vdd-hba-supply:
     description:
+2-2
Documentation/driver-api/mtdnand.rst
···

 2. David Woodhouse\ dwmw2@infradead.org

-3. Thomas Gleixner\ tglx@linutronix.de
+3. Thomas Gleixner\ tglx@kernel.org

 A lot of users have provided bugfixes, improvements and helping hands
 for testing. Thanks a lot.

 The following people have contributed to this document:

-1. Thomas Gleixner\ tglx@linutronix.de
+1. Thomas Gleixner\ tglx@kernel.org
+1
Documentation/filesystems/locking.rst
···
 lm_breaker_owns_lease:	yes	no	no
 lm_lock_expirable	yes	no	no
 lm_expire_lock		no	no	yes
+lm_open_conflict	yes	no	no
 ======================	=============	=================	=========

 buffer_head
+6-4
Documentation/process/maintainer-soc.rst
···

 All typical platform related patches should be sent via SoC submaintainers
 (platform-specific maintainers). This includes also changes to per-platform or
-shared defconfigs (scripts/get_maintainer.pl might not provide correct
-addresses in such case).
+shared defconfigs. Note that scripts/get_maintainer.pl might not provide
+correct addresses for the shared defconfig, so ignore its output and manually
+create a CC-list based on the MAINTAINERS file, or use something like
+``scripts/get_maintainer.pl -f drivers/soc/FOO/``.

 Submitting Patches to the Main SoC Maintainers
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
···
 Usually the branch that includes a driver change will also include the
 corresponding change to the devicetree binding description, to ensure they are
 in fact compatible. This means that the devicetree branch can end up causing
-warnings in the "make dtbs_check" step. If a devicetree change depends on
+warnings in the ``make dtbs_check`` step. If a devicetree change depends on
 missing additions to a header file in include/dt-bindings/, it will fail the
-"make dtbs" step and not get merged.
+``make dtbs`` step and not get merged.

 There are multiple ways to deal with this:
···

 感谢以下人士对本文档作出的贡献:

-1. Thomas Gleixner tglx@linutronix.de
+1. Thomas Gleixner tglx@kernel.org

 2. Ingo Molnar mingo@elte.hu
···
 // SPDX-License-Identifier: (GPL-2.0 OR MIT)
 /*
- * bcm2712-rpi-5-b-ovl-rp1.dts is the overlay-ready DT which will make
- * the RP1 driver to load the RP1 dtb overlay at runtime, while
- * bcm2712-rpi-5-b.dts (this file) is the fully defined one (i.e. it
- * already contains RP1 node, so no overlay is loaded nor needed).
- * This file is intended to host the override nodes for the RP1 peripherals,
- * e.g. to declare the phy of the ethernet interface or the custom pin setup
- * for several RP1 peripherals.
- * This in turn is due to the fact that there's no current generic
- * infrastructure to reference nodes (i.e. the nodes in rp1-common.dtsi) that
- * are not yet defined in the DT since they are loaded at runtime via overlay.
+ * As a loose attempt to separate RP1 customizations from SoC peripheral
+ * definitions, this file is intended to host the override nodes for the RP1
+ * peripherals, e.g. to declare the phy of the ethernet interface or custom
+ * pin setup.
  * All other nodes that do not have anything to do with RP1 should be added
- * to the included bcm2712-rpi-5-b-ovl-rp1.dts instead.
+ * to the included bcm2712-rpi-5-b-base.dtsi instead.
  */

 /dts-v1/;

-#include "bcm2712-rpi-5-b-ovl-rp1.dts"
+#include "bcm2712-rpi-5-b-base.dtsi"

 / {
	aliases {
···
 };

 &pcie2 {
-	#include "rp1-nexus.dtsi"
+	pci@0,0 {
+		reg = <0x0 0x0 0x0 0x0 0x0>;
+		ranges;
+		bus-range = <0 1>;
+		device_type = "pci";
+		#address-cells = <3>;
+		#size-cells = <2>;
+
+		dev@0,0 {
+			compatible = "pci1de4,1";
+			reg = <0x10000 0x0 0x0 0x0 0x0>;
+			ranges = <0x1 0x0 0x0 0x82010000 0x0 0x0 0x0 0x400000>;
+			interrupt-controller;
+			#interrupt-cells = <2>;
+			#address-cells = <3>;
+			#size-cells = <2>;
+
+			#include "rp1-common.dtsi"
+		};
+	};
 };

 &rp1_eth {
···
 endif

-ifdef CONFIG_RELOCATABLE
-$(obj)/Image: vmlinux.unstripped FORCE
-else
 $(obj)/Image: vmlinux FORCE
-endif
	$(call if_changed,objcopy)

 $(obj)/Image.gz: $(obj)/Image FORCE
-2
arch/riscv/configs/nommu_k210_defconfig
···
 # CONFIG_HW_RANDOM is not set
 # CONFIG_DEVMEM is not set
 CONFIG_I2C=y
-# CONFIG_I2C_COMPAT is not set
 CONFIG_I2C_CHARDEV=y
 # CONFIG_I2C_HELPER_AUTO is not set
 CONFIG_I2C_DESIGNWARE_CORE=y
···
 # CONFIG_FRAME_POINTER is not set
 # CONFIG_DEBUG_MISC is not set
 CONFIG_PANIC_ON_OOPS=y
-# CONFIG_SCHED_DEBUG is not set
 # CONFIG_RCU_TRACE is not set
 # CONFIG_FTRACE is not set
 # CONFIG_RUNTIME_TESTING_MENU is not set
-1
arch/riscv/configs/nommu_k210_sdcard_defconfig
···
 # CONFIG_FRAME_POINTER is not set
 # CONFIG_DEBUG_MISC is not set
 CONFIG_PANIC_ON_OOPS=y
-# CONFIG_SCHED_DEBUG is not set
 # CONFIG_RCU_TRACE is not set
 # CONFIG_FTRACE is not set
 # CONFIG_RUNTIME_TESTING_MENU is not set
-1
arch/riscv/configs/nommu_virt_defconfig
···
 # CONFIG_MISC_FILESYSTEMS is not set
 CONFIG_LSM="[]"
 CONFIG_PRINTK_TIME=y
-# CONFIG_SCHED_DEBUG is not set
 # CONFIG_RCU_TRACE is not set
 # CONFIG_FTRACE is not set
 # CONFIG_RUNTIME_TESTING_MENU is not set
···
	int ret;

	ret = sbi_hsm_hart_stop();
-	pr_crit("Unable to stop the cpu %u (%d)\n", smp_processor_id(), ret);
+	pr_crit("Unable to stop the cpu %d (%d)\n", smp_processor_id(), ret);
 }

 static int sbi_cpu_is_stopped(unsigned int cpuid)
···
	if (!h || kernel_len < sizeof(*h))
		return -EINVAL;

-	/* According to Documentation/riscv/boot-image-header.rst,
+	/* According to Documentation/arch/riscv/boot-image-header.rst,
	 * use "magic2" field to check when version >= 0.2.
	 */
···

	add_random_kstack_offset();

-	if (syscall >= 0 && syscall < NR_syscalls)
+	if (syscall >= 0 && syscall < NR_syscalls) {
+		syscall = array_index_nospec(syscall, NR_syscalls);
		syscall_handler(regs, syscall);
+	}

	/*
	 * Ultimately, this value will get limited by KSTACK_OFFSET_MAX(),
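The hunk above inserts `array_index_nospec()` between the bounds check and the table access, so that a mispredicted branch cannot speculatively index past `NR_syscalls`. A minimal userspace sketch of the underlying idea, assuming a simplified branchless mask rather than the kernel's exact `array_index_mask_nospec()` implementation:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Sketch of the array_index_nospec() idea: clamp an index with a
 * branchless mask so that even under misprediction the computed
 * index never exceeds the array bound. The mask is all-ones when
 * index < size and zero otherwise, with no conditional branch.
 */
static size_t index_nospec(size_t index, size_t size)
{
	size_t mask = (size_t)0 - (size_t)(index < size);

	return index & mask;
}
```

An out-of-range index is clamped to 0, a harmless in-bounds slot, which is exactly the behavior the kernel macro relies on inside an already-taken bounds check.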
+2-4
arch/riscv/net/bpf_jit_comp64.c
···

	store_args(nr_arg_slots, args_off, ctx);

-	/* skip to actual body of traced function */
-	if (flags & BPF_TRAMP_F_ORIG_STACK)
-		orig_call += RV_FENTRY_NINSNS * 4;
-
	if (flags & BPF_TRAMP_F_CALL_ORIG) {
		emit_imm(RV_REG_A0, ctx->insns ? (const s64)im : RV_MAX_COUNT_IMM, ctx);
		ret = emit_call((const u64)__bpf_tramp_enter, true, ctx);
···
	}

	if (flags & BPF_TRAMP_F_CALL_ORIG) {
+		/* skip to actual body of traced function */
+		orig_call += RV_FENTRY_NINSNS * 4;
		restore_args(min_t(int, nr_arg_slots, RV_MAX_REG_ARGS), args_off, ctx);
		restore_stack_args(nr_arg_slots - RV_MAX_REG_ARGS, args_off, stk_arg_off, ctx);
		ret = emit_call((const u64)orig_call, true, ctx);
+1-1
arch/sh/kernel/perf_event.c
···
  * Heavily based on the x86 and PowerPC implementations.
  *
  * x86:
- * Copyright (C) 2008 Thomas Gleixner <tglx@linutronix.de>
+ * Copyright (C) 2008 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>
  * Copyright (C) 2008-2009 Red Hat, Inc., Ingo Molnar
  * Copyright (C) 2009 Jaswinder Singh Rajput
  * Copyright (C) 2009 Advanced Micro Devices, Inc., Robert Richter
+23
arch/sparc/kernel/pci.c
···

 __setup("ofpci_debug=", ofpci_debug);

+static void of_fixup_pci_pref(struct pci_dev *dev, int index,
+			      struct resource *res)
+{
+	struct pci_bus_region region;
+
+	if (!(res->flags & IORESOURCE_MEM_64))
+		return;
+
+	if (!resource_size(res))
+		return;
+
+	pcibios_resource_to_bus(dev->bus, &region, res);
+	if (region.end <= ~((u32)0))
+		return;
+
+	if (!(res->flags & IORESOURCE_PREFETCH)) {
+		res->flags |= IORESOURCE_PREFETCH;
+		pci_info(dev, "reg 0x%x: fixup: pref added to 64-bit resource\n",
+			 index);
+	}
+}
+
 static unsigned long pci_parse_of_flags(u32 addr0)
 {
	unsigned long flags = 0;
···
		res->end = op_res->end;
		res->flags = flags;
		res->name = pci_name(dev);
+		of_fixup_pci_pref(dev, i, res);

		pci_info(dev, "reg 0x%x: %pR\n", i, res);
	}
+1-1
arch/sparc/kernel/perf_event.c
···
  * This code is based almost entirely upon the x86 perf event
  * code, which is:
  *
- * Copyright (C) 2008 Thomas Gleixner <tglx@linutronix.de>
+ * Copyright (C) 2008 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>
  * Copyright (C) 2008-2009 Red Hat, Inc., Ingo Molnar
  * Copyright (C) 2009 Jaswinder Singh Rajput
  * Copyright (C) 2009 Advanced Micro Devices, Inc., Robert Richter
+2
arch/x86/coco/sev/Makefile
···
 # GCC may fail to respect __no_sanitize_address or __no_kcsan when inlining
 KASAN_SANITIZE_noinstr.o := n
 KCSAN_SANITIZE_noinstr.o := n
+
+GCOV_PROFILE_noinstr.o := n
+1-1
arch/x86/events/core.c
···
 /*
  * Performance events x86 architecture code
  *
- * Copyright (C) 2008 Thomas Gleixner <tglx@linutronix.de>
+ * Copyright (C) 2008 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>
  * Copyright (C) 2008-2009 Red Hat, Inc., Ingo Molnar
  * Copyright (C) 2009 Jaswinder Singh Rajput
  * Copyright (C) 2009 Advanced Micro Devices, Inc., Robert Richter
+1-1
arch/x86/events/perf_event.h
···
 /*
  * Performance events x86 architecture header
  *
- * Copyright (C) 2008 Thomas Gleixner <tglx@linutronix.de>
+ * Copyright (C) 2008 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>
  * Copyright (C) 2008-2009 Red Hat, Inc., Ingo Molnar
  * Copyright (C) 2009 Jaswinder Singh Rajput
  * Copyright (C) 2009 Advanced Micro Devices, Inc., Robert Richter
+29-3
arch/x86/kernel/fpu/core.c
···
 #ifdef CONFIG_X86_64
 void fpu_update_guest_xfd(struct fpu_guest *guest_fpu, u64 xfd)
 {
+	struct fpstate *fpstate = guest_fpu->fpstate;
+
	fpregs_lock();
-	guest_fpu->fpstate->xfd = xfd;
-	if (guest_fpu->fpstate->in_use)
-		xfd_update_state(guest_fpu->fpstate);
+
+	/*
+	 * KVM's guest ABI is that setting XFD[i]=1 *can* immediately revert the
+	 * save state to its initial configuration. Likewise, KVM_GET_XSAVE does
+	 * the same as XSAVE and returns XSTATE_BV[i]=0 whenever XFD[i]=1.
+	 *
+	 * If the guest's FPU state is in hardware, just update XFD: the XSAVE
+	 * in fpu_swap_kvm_fpstate will clear XSTATE_BV[i] whenever XFD[i]=1.
+	 *
+	 * If however the guest's FPU state is NOT resident in hardware, clear
+	 * disabled components in XSTATE_BV now, or a subsequent XRSTOR will
+	 * attempt to load disabled components and generate #NM _in the host_.
+	 */
+	if (xfd && test_thread_flag(TIF_NEED_FPU_LOAD))
+		fpstate->regs.xsave.header.xfeatures &= ~xfd;
+
+	fpstate->xfd = xfd;
+	if (fpstate->in_use)
+		xfd_update_state(fpstate);
+
	fpregs_unlock();
 }
 EXPORT_SYMBOL_FOR_KVM(fpu_update_guest_xfd);
···
	}

	if (ustate->xsave.header.xfeatures & ~xcr0)
+		return -EINVAL;
+
+	/*
+	 * Disabled features must be in their initial state, otherwise XRSTOR
+	 * causes an exception.
+	 */
+	if (WARN_ON_ONCE(ustate->xsave.header.xfeatures & kstate->xfd))
		return -EINVAL;

	/*
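The invariant the comment above describes reduces to a small bit-mask rule: whenever a component is XFD-disabled and the guest state is not live in registers, its XSTATE_BV bit must be dropped. A standalone sketch of just that rule, with a hypothetical `sanitize_xstate_bv()` helper standing in for the in-place update:

```c
#include <stdint.h>

/*
 * Sketch of the XSTATE_BV sanitization rule from the hunk above:
 * XSTATE_BV[i] must be cleared for every XFD-disabled component i
 * when the FPU state is saved in memory (not resident in hardware),
 * or a later XRSTOR of that buffer would raise #NM.
 */
static uint64_t sanitize_xstate_bv(uint64_t xstate_bv, uint64_t xfd,
				   int state_in_hardware)
{
	if (xfd && !state_in_hardware)
		xstate_bv &= ~xfd;	/* drop disabled components */

	return xstate_bv;
}
```

When the state is still in hardware the mask is left untouched, because the eventual XSAVE clears those bits itself.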
+16-3
arch/x86/kernel/kvm.c
···
	struct swait_queue_head wq;
	u32 token;
	int cpu;
+	bool dummy;
 };

 static struct kvm_task_sleep_head {
···
	raw_spin_lock(&b->lock);
	e = _find_apf_task(b, token);
	if (e) {
-		/* dummy entry exist -> wake up was delivered ahead of PF */
-		hlist_del(&e->link);
+		struct kvm_task_sleep_node *dummy = NULL;
+
+		/*
+		 * The entry can either be a 'dummy' entry (which is put on the
+		 * list when wake-up happens ahead of APF handling completion)
+		 * or a token from another task which should not be touched.
+		 */
+		if (e->dummy) {
+			hlist_del(&e->link);
+			dummy = e;
+		}
+
		raw_spin_unlock(&b->lock);
-		kfree(e);
+		kfree(dummy);
		return false;
	}

	n->token = token;
	n->cpu = smp_processor_id();
+	n->dummy = false;
	init_swait_queue_head(&n->wq);
	hlist_add_head(&n->link, &b->list);
	raw_spin_unlock(&b->lock);
···
	}
	dummy->token = token;
	dummy->cpu = smp_processor_id();
+	dummy->dummy = true;
	init_swait_queue_head(&dummy->wq);
	hlist_add_head(&dummy->link, &b->list);
	dummy = NULL;
+1-1
arch/x86/kernel/x86_init.c
···
 /*
- * Copyright (C) 2009 Thomas Gleixner <tglx@linutronix.de>
+ * Copyright (C) 2009 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>
  *
  * For licencing details see kernel-base/COPYING
  */
+9
arch/x86/kvm/x86.c
···
 static int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu,
					 struct kvm_xsave *guest_xsave)
 {
+	union fpregs_state *xstate = (union fpregs_state *)guest_xsave->region;
+
	if (fpstate_is_confidential(&vcpu->arch.guest_fpu))
		return vcpu->kvm->arch.has_protected_state ? -EINVAL : 0;
+
+	/*
+	 * For backwards compatibility, do not expect disabled features to be in
+	 * their initial state. XSTATE_BV[i] must still be cleared whenever
+	 * XFD[i]=1, or XRSTOR would cause a #NM.
+	 */
+	xstate->xsave.header.xfeatures &= ~vcpu->arch.guest_fpu.fpstate->xfd;

	return fpu_copy_uabi_to_guest_fpstate(&vcpu->arch.guest_fpu,
					      guest_xsave->region,
+1-1
arch/x86/mm/pti.c
···
  * Signed-off-by: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
  *
  * Major changes to the original code by: Dave Hansen <dave.hansen@intel.com>
- * Mostly rewritten by Thomas Gleixner <tglx@linutronix.de> and
+ * Mostly rewritten by Thomas Gleixner <tglx@kernel.org> and
  *	Andy Lutomirsky <luto@amacapital.net>
  */
 #include <linux/kernel.h>
+18-5
block/blk-integrity.c
···
 bool blk_integrity_merge_rq(struct request_queue *q, struct request *req,
			     struct request *next)
 {
+	struct bio_integrity_payload *bip, *bip_next;
+
	if (blk_integrity_rq(req) == 0 && blk_integrity_rq(next) == 0)
		return true;

	if (blk_integrity_rq(req) == 0 || blk_integrity_rq(next) == 0)
		return false;

-	if (bio_integrity(req->bio)->bip_flags !=
-	    bio_integrity(next->bio)->bip_flags)
+	bip = bio_integrity(req->bio);
+	bip_next = bio_integrity(next->bio);
+	if (bip->bip_flags != bip_next->bip_flags)
+		return false;
+
+	if (bip->bip_flags & BIP_CHECK_APPTAG &&
+	    bip->app_tag != bip_next->app_tag)
		return false;

	if (req->nr_integrity_segments + next->nr_integrity_segments >
···
 bool blk_integrity_merge_bio(struct request_queue *q, struct request *req,
			      struct bio *bio)
 {
+	struct bio_integrity_payload *bip, *bip_bio = bio_integrity(bio);
	int nr_integrity_segs;

-	if (blk_integrity_rq(req) == 0 && bio_integrity(bio) == NULL)
+	if (blk_integrity_rq(req) == 0 && bip_bio == NULL)
		return true;

-	if (blk_integrity_rq(req) == 0 || bio_integrity(bio) == NULL)
+	if (blk_integrity_rq(req) == 0 || bip_bio == NULL)
		return false;

-	if (bio_integrity(req->bio)->bip_flags != bio_integrity(bio)->bip_flags)
+	bip = bio_integrity(req->bio);
+	if (bip->bip_flags != bip_bio->bip_flags)
+		return false;
+
+	if (bip->bip_flags & BIP_CHECK_APPTAG &&
+	    bip->app_tag != bip_bio->app_tag)
		return false;

	nr_integrity_segs = blk_rq_count_integrity_sg(q, bio);
+1-2
block/blk-mq.c
···
		 * Make sure reading the old queue_hw_ctx from other
		 * context concurrently won't trigger uaf.
		 */
-		synchronize_rcu_expedited();
-		kfree(hctxs);
+		kfree_rcu_mightsleep(hctxs);
		hctxs = new_hctxs;
	}

+9-16
block/blk-rq-qos.h
···

 static inline void rq_qos_cleanup(struct request_queue *q, struct bio *bio)
 {
-	if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
-	    q->rq_qos)
+	if (test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags) && q->rq_qos)
		__rq_qos_cleanup(q->rq_qos, bio);
 }

 static inline void rq_qos_done(struct request_queue *q, struct request *rq)
 {
-	if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
-	    q->rq_qos && !blk_rq_is_passthrough(rq))
+	if (test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags) &&
+	    q->rq_qos && !blk_rq_is_passthrough(rq))
		__rq_qos_done(q->rq_qos, rq);
 }

 static inline void rq_qos_issue(struct request_queue *q, struct request *rq)
 {
-	if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
-	    q->rq_qos)
+	if (test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags) && q->rq_qos)
		__rq_qos_issue(q->rq_qos, rq);
 }

 static inline void rq_qos_requeue(struct request_queue *q, struct request *rq)
 {
-	if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
-	    q->rq_qos)
+	if (test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags) && q->rq_qos)
		__rq_qos_requeue(q->rq_qos, rq);
 }
···

 static inline void rq_qos_throttle(struct request_queue *q, struct bio *bio)
 {
-	if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
-	    q->rq_qos) {
+	if (test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags) && q->rq_qos) {
		bio_set_flag(bio, BIO_QOS_THROTTLED);
		__rq_qos_throttle(q->rq_qos, bio);
	}
···
 static inline void rq_qos_track(struct request_queue *q, struct request *rq,
				 struct bio *bio)
 {
-	if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
-	    q->rq_qos)
+	if (test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags) && q->rq_qos)
		__rq_qos_track(q->rq_qos, rq, bio);
 }

 static inline void rq_qos_merge(struct request_queue *q, struct request *rq,
				 struct bio *bio)
 {
-	if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
-	    q->rq_qos) {
+	if (test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags) && q->rq_qos) {
		bio_set_flag(bio, BIO_QOS_MERGED);
		__rq_qos_merge(q->rq_qos, rq, bio);
	}
···

 static inline void rq_qos_queue_depth_changed(struct request_queue *q)
 {
-	if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
-	    q->rq_qos)
+	if (test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags) && q->rq_qos)
		__rq_qos_queue_depth_changed(q->rq_qos);
 }

+11-8
drivers/acpi/pci_irq.c
···
	 * the IRQ value, which is hardwired to specific interrupt inputs on
	 * the interrupt controller.
	 */
-	pr_debug("%04x:%02x:%02x[%c] -> %s[%d]\n",
+	pr_debug("%04x:%02x:%02x[%c] -> %s[%u]\n",
		 entry->id.segment, entry->id.bus, entry->id.device,
		 pin_name(entry->pin), prt->source, entry->index);
···
 int acpi_pci_irq_enable(struct pci_dev *dev)
 {
	struct acpi_prt_entry *entry;
-	int gsi;
+	u32 gsi;
	u8 pin;
	int triggering = ACPI_LEVEL_SENSITIVE;
	/*
···
		return 0;
	}

+	rc = -ENODEV;
+
	if (entry) {
		if (entry->link)
-			gsi = acpi_pci_link_allocate_irq(entry->link,
+			rc = acpi_pci_link_allocate_irq(entry->link,
							entry->index,
							&triggering, &polarity,
-							&link);
-		else
+							&link, &gsi);
+		else {
			gsi = entry->index;
-	} else
-		gsi = -1;
+			rc = 0;
+		}
+	}

-	if (gsi < 0) {
+	if (rc < 0) {
		/*
		 * No IRQ known to the ACPI subsystem - maybe the BIOS /
		 * driver reported one, then use it. Exit in any case.
+25-14
drivers/acpi/pci_link.c
···
	/* >IRQ15 */
 };

-static int acpi_irq_pci_sharing_penalty(int irq)
+static int acpi_irq_pci_sharing_penalty(u32 irq)
 {
	struct acpi_pci_link *link;
	int penalty = 0;
···
	return penalty;
 }

-static int acpi_irq_get_penalty(int irq)
+static int acpi_irq_get_penalty(u32 irq)
 {
	int penalty = 0;
···
 static int acpi_pci_link_allocate(struct acpi_pci_link *link)
 {
	acpi_handle handle = link->device->handle;
-	int irq;
+	u32 irq;
	int i;

	if (link->irq.initialized) {
···
	return 0;
 }

-/*
- * acpi_pci_link_allocate_irq
- * success: return IRQ >= 0
- * failure: return -1
+/**
+ * acpi_pci_link_allocate_irq(): Retrieve a link device GSI
+ *
+ * @handle: Handle for the link device
+ * @index: GSI index
+ * @triggering: pointer to store the GSI trigger
+ * @polarity: pointer to store GSI polarity
+ * @name: pointer to store link device name
+ * @gsi: pointer to store GSI number
+ *
+ * Returns:
+ * 0 on success with @triggering, @polarity, @name, @gsi initialized.
+ * -ENODEV on failure
  */
 int acpi_pci_link_allocate_irq(acpi_handle handle, int index, int *triggering,
-			       int *polarity, char **name)
+			       int *polarity, char **name, u32 *gsi)
 {
	struct acpi_device *device = acpi_fetch_acpi_dev(handle);
	struct acpi_pci_link *link;

	if (!device) {
		acpi_handle_err(handle, "Invalid link device\n");
-		return -1;
+		return -ENODEV;
	}

	link = acpi_driver_data(device);
	if (!link) {
		acpi_handle_err(handle, "Invalid link context\n");
-		return -1;
+		return -ENODEV;
	}

	/* TBD: Support multiple index (IRQ) entries per Link Device */
	if (index) {
		acpi_handle_err(handle, "Invalid index %d\n", index);
-		return -1;
+		return -ENODEV;
	}

	mutex_lock(&acpi_link_lock);
	if (acpi_pci_link_allocate(link)) {
		mutex_unlock(&acpi_link_lock);
-		return -1;
+		return -ENODEV;
	}

	if (!link->irq.active) {
		mutex_unlock(&acpi_link_lock);
		acpi_handle_err(handle, "Link active IRQ is 0!\n");
-		return -1;
+		return -ENODEV;
	}
	link->refcnt++;
	mutex_unlock(&acpi_link_lock);
···
	if (name)
		*name = acpi_device_bid(link->device);
	acpi_handle_debug(handle, "Link is referenced\n");
-	return link->irq.active;
+	*gsi = link->irq.active;
+
+	return 0;
 }

 /*
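The conversion above separates the status code from the returned GSI: the function now reports 0/-ENODEV and writes the GSI through an out-parameter, so a GSI value that doesn't fit an `int` is no longer ambiguous with an error. A minimal userspace sketch of this out-parameter pattern, with a hypothetical `link_allocate_irq()` standing in for the ACPI function:

```c
#include <errno.h>
#include <stdint.h>

/*
 * Sketch of the return-value split from the hunk above: status comes
 * back as 0/-errno, and the (unsigned, possibly > INT_MAX) GSI is
 * written through *gsi only on success.
 */
static int link_allocate_irq(uint32_t active, uint32_t *gsi)
{
	if (!active)
		return -ENODEV;	/* no active IRQ on the link */

	*gsi = active;
	return 0;
}
```

Callers then test the status first and only consume `*gsi` when it is zero, mirroring the new `rc < 0` check in `acpi_pci_irq_enable()`.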
-3
drivers/android/binder/page_range.rs
···
	drop(mm);
	drop(page);

-	// SAFETY: We just unlocked the lru lock, but it should be locked when we return.
-	unsafe { bindings::spin_lock(&raw mut (*lru).lock) };
-
	LRU_REMOVED_ENTRY
 }
···
	DECLARE_BITMAP(old_stat, MAX_LINE);
	DECLARE_BITMAP(cur_stat, MAX_LINE);
	DECLARE_BITMAP(new_stat, MAX_LINE);
+	DECLARE_BITMAP(int_stat, MAX_LINE);
	DECLARE_BITMAP(trigger, MAX_LINE);
	DECLARE_BITMAP(edges, MAX_LINE);
	int ret;

+	if (chip->driver_data & PCA_PCAL) {
+		/* Read INT_STAT before it is cleared by the input-port read. */
+		ret = pca953x_read_regs(chip, PCAL953X_INT_STAT, int_stat);
+		if (ret)
+			return false;
+	}
+
	ret = pca953x_read_regs(chip, chip->regs->input, cur_stat);
	if (ret)
		return false;
+
+	if (chip->driver_data & PCA_PCAL) {
+		/* Detect short pulses via INT_STAT. */
+		bitmap_and(trigger, int_stat, chip->irq_mask, gc->ngpio);
+
+		/* Apply filter for rising/falling edge selection. */
+		bitmap_replace(new_stat, chip->irq_trig_fall, chip->irq_trig_raise,
+			       cur_stat, gc->ngpio);
+
+		bitmap_and(int_stat, new_stat, trigger, gc->ngpio);
+	} else {
+		bitmap_zero(int_stat, gc->ngpio);
+	}

	/* Remove output pins from the equation */
	pca953x_read_regs(chip, chip->regs->direction, reg_direction);
···

	if (bitmap_empty(chip->irq_trig_level_high, gc->ngpio) &&
	    bitmap_empty(chip->irq_trig_level_low, gc->ngpio)) {
-		if (bitmap_empty(trigger, gc->ngpio))
+		if (bitmap_empty(trigger, gc->ngpio) &&
+		    bitmap_empty(int_stat, gc->ngpio))
			return false;
	}
···
	bitmap_and(old_stat, chip->irq_trig_raise, new_stat, gc->ngpio);
	bitmap_or(edges, old_stat, cur_stat, gc->ngpio);
	bitmap_and(pending, edges, trigger, gc->ngpio);
+	bitmap_or(pending, pending, int_stat, gc->ngpio);

	bitmap_and(cur_stat, new_stat, chip->irq_trig_level_high, gc->ngpio);
	bitmap_and(cur_stat, cur_stat, chip->irq_mask, gc->ngpio);
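The bitmap arithmetic above is compact: latched INT_STAT bits are masked by the enabled interrupts, then filtered by whether the pin's current level matches its configured edge (rising edge wanted when the line now reads high, falling when it reads low). A sketch of the same logic on plain 32-bit words, with hypothetical helper names, assuming `bitmap_replace(old, new, mask)` semantics of "take bits from `new` where `mask` is set, else from `old`":

```c
#include <stdint.h>

/* 32-bit analogue of the kernel's bitmap_replace():
 * result = (oldv & ~mask) | (newv & mask). */
static uint32_t bitmap_replace32(uint32_t oldv, uint32_t newv, uint32_t mask)
{
	return (oldv & ~mask) | (newv & mask);
}

/*
 * Sketch of the short-pulse detection from the hunk above: a latched
 * INT_STAT bit counts as pending only if the interrupt is unmasked
 * and the configured edge for the pin's current level matches.
 */
static uint32_t pcal_short_pulse_pending(uint32_t int_stat, uint32_t irq_mask,
					 uint32_t trig_fall, uint32_t trig_raise,
					 uint32_t cur_level)
{
	uint32_t trigger = int_stat & irq_mask;
	/* per pin: pick the rising-edge bit when the level is high,
	 * the falling-edge bit when it is low */
	uint32_t edge_sel = bitmap_replace32(trig_fall, trig_raise, cur_level);

	return edge_sel & trigger;
}
```

The key point is that reading INT_STAT before the input port preserves pulses too short to be seen in the level snapshot, which the plain edge comparison would otherwise miss.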
+1
drivers/gpio/gpio-rockchip.c
···
	gc->ngpio = bank->nr_pins;
	gc->label = bank->name;
	gc->parent = bank->dev;
+	gc->can_sleep = true;

	ret = gpiochip_add_data(gc, bank);
	if (ret) {
+180-71
drivers/gpio/gpiolib-shared.c
···3838 int dev_id;3939 /* Protects the auxiliary device struct and the lookup table. */4040 struct mutex lock;4141+ struct lock_class_key lock_key;4142 struct auxiliary_device adev;4243 struct gpiod_lookup_table *lookup;4444+ bool is_reset_gpio;4345};44464547/* Represents a single GPIO pin. */···7876 return NULL;7977}80787979+static struct gpio_shared_ref *gpio_shared_make_ref(struct fwnode_handle *fwnode,8080+ const char *con_id,8181+ enum gpiod_flags flags)8282+{8383+ char *con_id_cpy __free(kfree) = NULL;8484+8585+ struct gpio_shared_ref *ref __free(kfree) = kzalloc(sizeof(*ref), GFP_KERNEL);8686+ if (!ref)8787+ return NULL;8888+8989+ if (con_id) {9090+ con_id_cpy = kstrdup(con_id, GFP_KERNEL);9191+ if (!con_id_cpy)9292+ return NULL;9393+ }9494+9595+ ref->dev_id = ida_alloc(&gpio_shared_ida, GFP_KERNEL);9696+ if (ref->dev_id < 0)9797+ return NULL;9898+9999+ ref->flags = flags;100100+ ref->con_id = no_free_ptr(con_id_cpy);101101+ ref->fwnode = fwnode;102102+ lockdep_register_key(&ref->lock_key);103103+ mutex_init_with_key(&ref->lock, &ref->lock_key);104104+105105+ return no_free_ptr(ref);106106+}107107+108108+static int gpio_shared_setup_reset_proxy(struct gpio_shared_entry *entry,109109+ enum gpiod_flags flags)110110+{111111+ struct gpio_shared_ref *ref;112112+113113+ list_for_each_entry(ref, &entry->refs, list) {114114+ if (ref->is_reset_gpio)115115+ /* Already set-up. */116116+ return 0;117117+ }118118+119119+ ref = gpio_shared_make_ref(NULL, "reset", flags);120120+ if (!ref)121121+ return -ENOMEM;122122+123123+ ref->is_reset_gpio = true;124124+125125+ list_add_tail(&ref->list, &entry->refs);126126+127127+ pr_debug("Created a secondary shared GPIO reference for potential reset-gpio device for GPIO %u at %s\n",128128+ entry->offset, fwnode_get_name(entry->fwnode));129129+130130+ return 0;131131+}132132+81133/* Handle all special nodes that we should ignore. 
*/82134static bool gpio_shared_of_node_ignore(struct device_node *node)83135{···162106 size_t con_id_len, suffix_len;163107 struct fwnode_handle *fwnode;164108 struct of_phandle_args args;109109+ struct gpio_shared_ref *ref;165110 struct property *prop;166111 unsigned int offset;167112 const char *suffix;···195138196139 for (i = 0; i < count; i++) {197140 struct device_node *np __free(device_node) = NULL;141141+ char *con_id __free(kfree) = NULL;198142199143 ret = of_parse_phandle_with_args(curr, prop->name,200144 "#gpio-cells", i,···240182 list_add_tail(&entry->list, &gpio_shared_list);241183 }242184243243- struct gpio_shared_ref *ref __free(kfree) =244244- kzalloc(sizeof(*ref), GFP_KERNEL);245245- if (!ref)246246- return -ENOMEM;247247-248248- ref->fwnode = fwnode_handle_get(of_fwnode_handle(curr));249249- ref->flags = args.args[1];250250- mutex_init(&ref->lock);251251-252185 if (strends(prop->name, "gpios"))253186 suffix = "-gpios";254187 else if (strends(prop->name, "gpio"))···251202252203 /* We only set con_id if there's actually one. 
*/253204 if (strcmp(prop->name, "gpios") && strcmp(prop->name, "gpio")) {254254- ref->con_id = kstrdup(prop->name, GFP_KERNEL);255255- if (!ref->con_id)205205+ con_id = kstrdup(prop->name, GFP_KERNEL);206206+ if (!con_id)256207 return -ENOMEM;257208258258- con_id_len = strlen(ref->con_id);209209+ con_id_len = strlen(con_id);259210 suffix_len = strlen(suffix);260211261261- ref->con_id[con_id_len - suffix_len] = '\0';212212+ con_id[con_id_len - suffix_len] = '\0';262213 }263214264264- ref->dev_id = ida_alloc(&gpio_shared_ida, GFP_KERNEL);265265- if (ref->dev_id < 0) {266266- kfree(ref->con_id);215215+ ref = gpio_shared_make_ref(fwnode_handle_get(of_fwnode_handle(curr)),216216+ con_id, args.args[1]);217217+ if (!ref)267218 return -ENOMEM;268268- }269219270220 if (!list_empty(&entry->refs))271221 pr_debug("GPIO %u at %s is shared by multiple firmware nodes\n",272222 entry->offset, fwnode_get_name(entry->fwnode));273223274274- list_add_tail(&no_free_ptr(ref)->list, &entry->refs);224224+ list_add_tail(&ref->list, &entry->refs);225225+226226+ if (strcmp(prop->name, "reset-gpios") == 0) {227227+ ret = gpio_shared_setup_reset_proxy(entry, args.args[1]);228228+ if (ret)229229+ return ret;230230+ }275231 }276232 }277233···360306 struct fwnode_handle *reset_fwnode = dev_fwnode(consumer);361307 struct fwnode_reference_args ref_args, aux_args;362308 struct device *parent = consumer->parent;309309+ struct gpio_shared_ref *real_ref;363310 bool match;364311 int ret;365312313313+ lockdep_assert_held(&ref->lock);314314+366315 /* The reset-gpio device must have a parent AND a firmware node. 
*/367316	if (!parent || !reset_fwnode)368368-		return false;369369-370370-	/*371371-	 * FIXME: use device_is_compatible() once the reset-gpio drivers gains372372-	 * a compatible string which it currently does not have.373373-	 */374374-	if (!strstarts(dev_name(consumer), "reset.gpio."))375317		return false;376318377319	/*···378328		return false;379329380330	/*381381-	 * The device associated with the shared reference's firmware node is382382-	 * the consumer of the reset control exposed by the reset-gpio device.383383-	 * It must have a "reset-gpios" property that's referencing the entry's384384-	 * firmware node.385385-	 *386386-	 * The reference args must agree between the real consumer and the387387-	 * auxiliary reset-gpio device.331331+	 * Now we need to find the actual pin we want to assign to this332332+	 * reset-gpio device. To that end: iterate over the list of references333333+	 * of this entry and see if there's one whose reset-gpios property's334334+	 * arguments match the ones from this consumer's node.388335	 */389389-	ret = fwnode_property_get_reference_args(ref->fwnode, "reset-gpios",390390-						 NULL, 2, 0, &ref_args);391391-	if (ret)392392-		return false;336336+	list_for_each_entry(real_ref, &entry->refs, list) {337337+		if (real_ref == ref)338338+			continue;393339394394-	ret = fwnode_property_get_reference_args(reset_fwnode, "reset-gpios",395395-						 NULL, 2, 0, &aux_args);396396-	if (ret) {340340+		guard(mutex)(&real_ref->lock);341341+342342+		if (!real_ref->fwnode)343343+			continue;344344+345345+		/*346346+		 * The device associated with the shared reference's firmware347347+		 * node is the consumer of the reset control exposed by the348348+		 * reset-gpio device. 
It must have a "reset-gpios" property349349+ * that's referencing the entry's firmware node.350350+ *351351+ * The reference args must agree between the real consumer and352352+ * the auxiliary reset-gpio device.353353+ */354354+ ret = fwnode_property_get_reference_args(real_ref->fwnode,355355+ "reset-gpios",356356+ NULL, 2, 0, &ref_args);357357+ if (ret)358358+ continue;359359+360360+ ret = fwnode_property_get_reference_args(reset_fwnode, "reset-gpios",361361+ NULL, 2, 0, &aux_args);362362+ if (ret) {363363+ fwnode_handle_put(ref_args.fwnode);364364+ continue;365365+ }366366+367367+ match = ((ref_args.fwnode == entry->fwnode) &&368368+ (aux_args.fwnode == entry->fwnode) &&369369+ (ref_args.args[0] == aux_args.args[0]));370370+397371 fwnode_handle_put(ref_args.fwnode);398398- return false;372372+ fwnode_handle_put(aux_args.fwnode);373373+374374+ if (!match)375375+ continue;376376+377377+ /*378378+ * Reuse the fwnode of the real device, next time we'll use it379379+ * in the normal path.380380+ */381381+ ref->fwnode = fwnode_handle_get(reset_fwnode);382382+ return true;399383 }400384401401- match = ((ref_args.fwnode == entry->fwnode) &&402402- (aux_args.fwnode == entry->fwnode) &&403403- (ref_args.args[0] == aux_args.args[0]));404404-405405- fwnode_handle_put(ref_args.fwnode);406406- fwnode_handle_put(aux_args.fwnode);407407- return match;385385+ return false;408386}409387#else410388static bool gpio_shared_dev_is_reset_gpio(struct device *consumer,···443365}444366#endif /* CONFIG_RESET_GPIO */445367446446-int gpio_shared_add_proxy_lookup(struct device *consumer, unsigned long lflags)368368+int gpio_shared_add_proxy_lookup(struct device *consumer, const char *con_id,369369+ unsigned long lflags)447370{448371 const char *dev_id = dev_name(consumer);372372+ struct gpiod_lookup_table *lookup;449373 struct gpio_shared_entry *entry;450374 struct gpio_shared_ref *ref;451375452452- struct gpiod_lookup_table *lookup __free(kfree) =453453- kzalloc(struct_size(lookup, table, 
2), GFP_KERNEL);454454-	if (!lookup)455455-		return -ENOMEM;456456-457376	list_for_each_entry(entry, &gpio_shared_list, list) {458377		list_for_each_entry(ref, &entry->refs, list) {459459-			if (!device_match_fwnode(consumer, ref->fwnode) &&460460-			    !gpio_shared_dev_is_reset_gpio(consumer, entry, ref))461461-				continue;462462-463378			guard(mutex)(&ref->lock);379379+380380+			/*381381+			 * FIXME: use device_is_compatible() once the reset-gpio382382+			 * driver gains a compatible string which it currently383383+			 * does not have.384384+			 */385385+			if (!ref->fwnode && strstarts(dev_name(consumer), "reset.gpio.")) {386386+				if (!gpio_shared_dev_is_reset_gpio(consumer, entry, ref))387387+					continue;388388+			} else if (!device_match_fwnode(consumer, ref->fwnode)) {389389+				continue;390390+			}391391+392392+			if ((!con_id && ref->con_id) || (con_id && !ref->con_id) ||393393+			    (con_id && ref->con_id && strcmp(con_id, ref->con_id) != 0))394394+				continue;464395465396			/* We've already done that on a previous request. */466397			if (ref->lookup)···482395			if (!key)483396				return -ENOMEM;484397398398+			lookup = kzalloc(struct_size(lookup, table, 2), GFP_KERNEL);399399+			if (!lookup)400400+				return -ENOMEM;401401+485402			pr_debug("Adding machine lookup entry for a shared GPIO for consumer %s, with key '%s' and con_id '%s'\n",486403				 dev_id, key, ref->con_id ?: "none");487404···493402			lookup->table[0] = GPIO_LOOKUP(no_free_ptr(key), 0,494403						       ref->con_id, lflags);495404496496-			ref->lookup = no_free_ptr(lookup);405405+			ref->lookup = lookup;497406			gpiod_add_lookup_table(ref->lookup);498407499408			return 0;···557466			 entry->offset, gpio_device_get_label(gdev));558467559468		list_for_each_entry(ref, &entry->refs, list) {560560-			pr_debug("Setting up a shared GPIO entry for %s\n",561561-				 fwnode_get_name(ref->fwnode));469469+			pr_debug("Setting up a shared GPIO entry for %s (con_id: '%s')\n",470470+				 fwnode_get_name(ref->fwnode) ?: "(no fwnode)",471471+				 ref->con_id ?: "(none)");562472563473			ret = 
gpio_shared_make_adev(gdev, entry, ref);564474 if (ret)···579487 if (!device_match_fwnode(&gdev->dev, entry->fwnode))580488 continue;581489582582- /*583583- * For some reason if we call synchronize_srcu() in GPIO core,584584- * descent here and take this mutex and then recursively call585585- * synchronize_srcu() again from gpiochip_remove() (which is586586- * totally fine) called after gpio_shared_remove_adev(),587587- * lockdep prints a false positive deadlock splat. Disable588588- * lockdep here.589589- */590590- lockdep_off();591490 list_for_each_entry(ref, &entry->refs, list) {592491 guard(mutex)(&ref->lock);593492···591508592509 gpio_shared_remove_adev(&ref->adev);593510 }594594- lockdep_on();595511 }596512}597513···686604{687605 list_del(&ref->list);688606 mutex_destroy(&ref->lock);607607+ lockdep_unregister_key(&ref->lock_key);689608 kfree(ref->con_id);690609 ida_free(&gpio_shared_ida, ref->dev_id);691610 fwnode_handle_put(ref->fwnode);···718635 }719636}720637638638+static bool gpio_shared_entry_is_really_shared(struct gpio_shared_entry *entry)639639+{640640+ size_t num_nodes = list_count_nodes(&entry->refs);641641+ struct gpio_shared_ref *ref;642642+643643+ if (num_nodes <= 1)644644+ return false;645645+646646+ if (num_nodes > 2)647647+ return true;648648+649649+ /* Exactly two references: */650650+ list_for_each_entry(ref, &entry->refs, list) {651651+ /*652652+ * Corner-case: the second reference comes from the potential653653+ * reset-gpio instance. However, this pin is not really shared654654+ * as it would have three references in this case. 
Avoid655655+ * creating unnecessary proxies.656656+ */657657+ if (ref->is_reset_gpio)658658+ return false;659659+ }660660+661661+ return true;662662+}663663+721664static void gpio_shared_free_exclusive(void)722665{723666 struct gpio_shared_entry *entry, *epos;724667725668 list_for_each_entry_safe(entry, epos, &gpio_shared_list, list) {726726- if (list_count_nodes(&entry->refs) > 1)669669+ if (gpio_shared_entry_is_really_shared(entry))727670 continue;728671729672 gpio_shared_drop_ref(list_first_entry(&entry->refs,
···3838struct isp_funcs {3939 int (*hw_init)(struct amdgpu_isp *isp);4040 int (*hw_fini)(struct amdgpu_isp *isp);4141+ int (*hw_suspend)(struct amdgpu_isp *isp);4242+ int (*hw_resume)(struct amdgpu_isp *isp);4143};42444345struct amdgpu_isp {
+6
drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
···201201 type = (amdgpu_device_ip_get_ip_block(adev, AMD_IP_BLOCK_TYPE_JPEG)) ?202202 AMD_IP_BLOCK_TYPE_JPEG : AMD_IP_BLOCK_TYPE_VCN;203203 break;204204+ case AMDGPU_HW_IP_VPE:205205+ type = AMD_IP_BLOCK_TYPE_VPE;206206+ break;204207 default:205208 type = AMD_IP_BLOCK_TYPE_NUM;206209 break;···723720 break;724721 case AMD_IP_BLOCK_TYPE_UVD:725722 count = adev->uvd.num_uvd_inst;723723+ break;724724+ case AMD_IP_BLOCK_TYPE_VPE:725725+ count = adev->vpe.num_instances;726726 break;727727 /* For all other IP block types not listed in the switch statement728728 * the ip status is valid here and the instance count is one.
+6-1
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
···144144 struct amdgpu_ring *ring;145145 ktime_t start_timestamp;146146147147- /* wptr for the fence for resets */147147+ /* wptr for the total submission for resets */148148 u64 wptr;149149 /* fence context for resets */150150 u64 context;151151+ /* has this fence been reemitted */152152+ unsigned int reemitted;153153+ /* wptr for the fence for the submission */154154+ u64 fence_wptr_start;155155+ u64 fence_wptr_end;151156};152157153158extern const struct drm_sched_backend_ops amdgpu_sched_ops;
+41
drivers/gpu/drm/amd/amdgpu/isp_v4_1_1.c
···2626 */27272828#include <linux/gpio/machine.h>2929+#include <linux/pm_runtime.h>2930#include "amdgpu.h"3031#include "isp_v4_1_1.h"3132···146145 return -ENODEV;147146 }148147148148+ /* The devices will be managed by the pm ops from the parent */149149+ dev_pm_syscore_device(dev, true);150150+149151exit:150152 /* Continue to add */151153 return 0;···181177 drm_err(&adev->ddev, "Failed to remove dev from genpd %d\n", ret);182178 return -ENODEV;183179 }180180+ dev_pm_syscore_device(dev, false);184181185182exit:186183 /* Continue to remove */187184 return 0;185185+}186186+187187+static int isp_suspend_device(struct device *dev, void *data)188188+{189189+ return pm_runtime_force_suspend(dev);190190+}191191+192192+static int isp_resume_device(struct device *dev, void *data)193193+{194194+ return pm_runtime_force_resume(dev);195195+}196196+197197+static int isp_v4_1_1_hw_suspend(struct amdgpu_isp *isp)198198+{199199+ int r;200200+201201+ r = device_for_each_child(isp->parent, NULL,202202+ isp_suspend_device);203203+ if (r)204204+ dev_err(isp->parent, "failed to suspend hw devices (%d)\n", r);205205+206206+ return r;207207+}208208+209209+static int isp_v4_1_1_hw_resume(struct amdgpu_isp *isp)210210+{211211+ int r;212212+213213+ r = device_for_each_child(isp->parent, NULL,214214+ isp_resume_device);215215+ if (r)216216+ dev_err(isp->parent, "failed to resume hw device (%d)\n", r);217217+218218+ return r;188219}189220190221static int isp_v4_1_1_hw_init(struct amdgpu_isp *isp)···408369static const struct isp_funcs isp_v4_1_1_funcs = {409370 .hw_init = isp_v4_1_1_hw_init,410371 .hw_fini = isp_v4_1_1_hw_fini,372372+ .hw_suspend = isp_v4_1_1_hw_suspend,373373+ .hw_resume = isp_v4_1_1_hw_resume,411374};412375413376void isp_v4_1_1_set_isp_funcs(struct amdgpu_isp *isp)
···2143214321442144	ret = smu_cmn_send_debug_smc_msg(smu, DEBUGSMC_MSG_Mode1Reset);21452145	if (!ret) {21462146-		if (amdgpu_emu_mode == 1)21462146+		if (amdgpu_emu_mode == 1) {21472147			msleep(50000);21482148-		else21482148+		} else {21492149+			/* disable mmio access while doing mode 1 reset */21502150+			smu->adev->no_hw_access = true;21512151+			/* ensure no_hw_access is globally visible before any MMIO */21522152+			smp_mb();21492153			msleep(1000);21542154+		}21502155	}2151215621522157	return ret;
+99-23
drivers/gpu/drm/drm_atomic_helper.c
···11621162	       new_state->self_refresh_active;11631163}1164116411651165-static void11661166-encoder_bridge_disable(struct drm_device *dev, struct drm_atomic_state *state)11651165+/**11661166+ * drm_atomic_helper_commit_encoder_bridge_disable - disable bridges and encoder11671167+ * @dev: DRM device11681168+ * @state: the driver state object11691169+ *11701170+ * Loops over all connectors in the current state and if the CRTC needs11711171+ * it, disables the bridge chain all the way, then disables the encoder11721172+ * afterwards.11731173+ */11741174+void11751175+drm_atomic_helper_commit_encoder_bridge_disable(struct drm_device *dev,11761176+						struct drm_atomic_state *state)11671177{11681178	struct drm_connector *connector;11691179	struct drm_connector_state *old_conn_state, *new_conn_state;···12391229		}12401230	}12411231}12321232+EXPORT_SYMBOL(drm_atomic_helper_commit_encoder_bridge_disable);1242123312431243-static void12441244-crtc_disable(struct drm_device *dev, struct drm_atomic_state *state)12341234+/**12351235+ * drm_atomic_helper_commit_crtc_disable - disable CRTCs12361236+ * @dev: DRM device12371237+ * @state: the driver state object12381238+ *12391239+ * Loops over all CRTCs in the current state and if the CRTC needs12401240+ * it, disables it.12411241+ */12421242+void12431243+drm_atomic_helper_commit_crtc_disable(struct drm_device *dev, struct drm_atomic_state *state)12451244{12461245	struct drm_crtc *crtc;12471246	struct drm_crtc_state *old_crtc_state, *new_crtc_state;···13011282		drm_crtc_vblank_put(crtc);13021283	}13031284}12851285+EXPORT_SYMBOL(drm_atomic_helper_commit_crtc_disable);1304128613051305-static void13061306-encoder_bridge_post_disable(struct drm_device *dev, struct drm_atomic_state *state)12871287+/**12881288+ * drm_atomic_helper_commit_encoder_bridge_post_disable - post-disable encoder bridges12891289+ * @dev: DRM device12901290+ * @state: the driver state object12911291+ *12921292+ * Loops over all connectors in the current state and if the 
CRTC needs12931293+ * it, post-disables all encoder bridges.12941294+ */12951295+void12961296+drm_atomic_helper_commit_encoder_bridge_post_disable(struct drm_device *dev, struct drm_atomic_state *state)13071297{13081298 struct drm_connector *connector;13091299 struct drm_connector_state *old_conn_state, *new_conn_state;···13631335 drm_bridge_put(bridge);13641336 }13651337}13381338+EXPORT_SYMBOL(drm_atomic_helper_commit_encoder_bridge_post_disable);1366133913671340static void13681341disable_outputs(struct drm_device *dev, struct drm_atomic_state *state)13691342{13701370- encoder_bridge_disable(dev, state);13431343+ drm_atomic_helper_commit_encoder_bridge_disable(dev, state);1371134413721372- crtc_disable(dev, state);13451345+ drm_atomic_helper_commit_encoder_bridge_post_disable(dev, state);1373134613741374- encoder_bridge_post_disable(dev, state);13471347+ drm_atomic_helper_commit_crtc_disable(dev, state);13751348}1376134913771350/**···14751446}14761447EXPORT_SYMBOL(drm_atomic_helper_calc_timestamping_constants);1477144814781478-static void14791479-crtc_set_mode(struct drm_device *dev, struct drm_atomic_state *state)14491449+/**14501450+ * drm_atomic_helper_commit_crtc_set_mode - set the new mode14511451+ * @dev: DRM device14521452+ * @state: the driver state object14531453+ *14541454+ * Loops over all connectors in the current state and if the mode has14551455+ * changed, change the mode of the CRTC, then call down the bridge14561456+ * chain and change the mode in all bridges as well.14571457+ */14581458+void14591459+drm_atomic_helper_commit_crtc_set_mode(struct drm_device *dev, struct drm_atomic_state *state)14801460{14811461 struct drm_crtc *crtc;14821462 struct drm_crtc_state *new_crtc_state;···15461508 drm_bridge_put(bridge);15471509 }15481510}15111511+EXPORT_SYMBOL(drm_atomic_helper_commit_crtc_set_mode);1549151215501513/**15511514 * drm_atomic_helper_commit_modeset_disables - modeset commit to disable outputs···15701531 
drm_atomic_helper_update_legacy_modeset_state(dev, state);15711532 drm_atomic_helper_calc_timestamping_constants(state);1572153315731573- crtc_set_mode(dev, state);15341534+ drm_atomic_helper_commit_crtc_set_mode(dev, state);15741535}15751536EXPORT_SYMBOL(drm_atomic_helper_commit_modeset_disables);1576153715771577-static void drm_atomic_helper_commit_writebacks(struct drm_device *dev,15781578- struct drm_atomic_state *state)15381538+/**15391539+ * drm_atomic_helper_commit_writebacks - issue writebacks15401540+ * @dev: DRM device15411541+ * @state: atomic state object being committed15421542+ *15431543+ * This loops over the connectors, checks if the new state requires15441544+ * a writeback job to be issued and in that case issues an atomic15451545+ * commit on each connector.15461546+ */15471547+void drm_atomic_helper_commit_writebacks(struct drm_device *dev,15481548+ struct drm_atomic_state *state)15791549{15801550 struct drm_connector *connector;15811551 struct drm_connector_state *new_conn_state;···16031555 }16041556 }16051557}15581558+EXPORT_SYMBOL(drm_atomic_helper_commit_writebacks);1606155916071607-static void16081608-encoder_bridge_pre_enable(struct drm_device *dev, struct drm_atomic_state *state)15601560+/**15611561+ * drm_atomic_helper_commit_encoder_bridge_pre_enable - pre-enable bridges15621562+ * @dev: DRM device15631563+ * @state: atomic state object being committed15641564+ *15651565+ * This loops over the connectors and if the CRTC needs it, pre-enables15661566+ * the entire bridge chain.15671567+ */15681568+void15691569+drm_atomic_helper_commit_encoder_bridge_pre_enable(struct drm_device *dev, struct drm_atomic_state *state)16091570{16101571 struct drm_connector *connector;16111572 struct drm_connector_state *new_conn_state;···16451588 drm_bridge_put(bridge);16461589 }16471590}15911591+EXPORT_SYMBOL(drm_atomic_helper_commit_encoder_bridge_pre_enable);1648159216491649-static void16501650-crtc_enable(struct drm_device *dev, struct drm_atomic_state 
*state)15931593+/**15941594+ * drm_atomic_helper_commit_crtc_enable - enables the CRTCs15951595+ * @dev: DRM device15961596+ * @state: atomic state object being committed15971597+ *15981598+ * This loops over CRTCs in the new state, and if the CRTC needs15991599+ * it, enables it.16001600+ */16011601+void16021602+drm_atomic_helper_commit_crtc_enable(struct drm_device *dev, struct drm_atomic_state *state)16511603{16521604	struct drm_crtc *crtc;16531605	struct drm_crtc_state *old_crtc_state;···16851619		}16861620	}16871621}16221622+EXPORT_SYMBOL(drm_atomic_helper_commit_crtc_enable);1688162316891689-static void16901690-encoder_bridge_enable(struct drm_device *dev, struct drm_atomic_state *state)16241624+/**16251625+ * drm_atomic_helper_commit_encoder_bridge_enable - enables the bridges16261626+ * @dev: DRM device16271627+ * @state: atomic state object being committed16281628+ *16291629+ * This loops over all connectors in the new state, and if the CRTC needs16301630+ * it, enables the entire bridge chain.16311631+ */16321632+void16331633+drm_atomic_helper_commit_encoder_bridge_enable(struct drm_device *dev, struct drm_atomic_state *state)16911634{16921635	struct drm_connector *connector;16931636	struct drm_connector_state *new_conn_state;···17391664		drm_bridge_put(bridge);17401665	}17411666}16671667+EXPORT_SYMBOL(drm_atomic_helper_commit_encoder_bridge_enable);1742166817431669/**17441670 * drm_atomic_helper_commit_modeset_enables - modeset commit to enable outputs···17581682void drm_atomic_helper_commit_modeset_enables(struct drm_device *dev,17591683					      struct drm_atomic_state *state)17601684{17611761-	encoder_bridge_pre_enable(dev, state);16851685+	drm_atomic_helper_commit_crtc_enable(dev, state);1762168617631763-	crtc_enable(dev, state);16871687+	drm_atomic_helper_commit_encoder_bridge_pre_enable(dev, state);1764168817651765-	encoder_bridge_enable(dev, state);16891689+	drm_atomic_helper_commit_encoder_bridge_enable(dev, state);1766169017671691	
drm_atomic_helper_commit_writebacks(dev, state);17681692}
+10
drivers/gpu/drm/drm_fb_helper.c
···366366{367367 struct drm_fb_helper *helper = container_of(work, struct drm_fb_helper, damage_work);368368369369+ if (helper->info->state != FBINFO_STATE_RUNNING)370370+ return;371371+369372 drm_fb_helper_fb_dirty(helper);370373}371374···734731 if (suspend) {735732 if (fb_helper->info->state != FBINFO_STATE_RUNNING)736733 return;734734+735735+ /*736736+ * Cancel pending damage work. During GPU reset, VBlank737737+ * interrupts are disabled and drm_fb_helper_fb_dirty()738738+ * would wait for VBlank timeout otherwise.739739+ */740740+ cancel_work_sync(&fb_helper->damage_work);737741738742 console_lock();739743
···10021002 return PTR_ERR(dsi->next_bridge);10031003 }1004100410051005- /*10061006- * set flag to request the DSI host bridge be pre-enabled before device bridge10071007- * in the chain, so the DSI host is ready when the device bridge is pre-enabled10081008- */10091009- dsi->next_bridge->pre_enable_prev_first = true;10101010-10111005 drm_bridge_add(&dsi->bridge);1012100610131007 ret = component_add(host->dev, &mtk_dsi_component_ops);
···26262727 tidss_runtime_get(tidss);28282929- drm_atomic_helper_commit_modeset_disables(ddev, old_state);3030- drm_atomic_helper_commit_planes(ddev, old_state, DRM_PLANE_COMMIT_ACTIVE_ONLY);3131- drm_atomic_helper_commit_modeset_enables(ddev, old_state);2929+ /*3030+ * TI's OLDI and DSI encoders need to be set up before the crtc is3131+ * enabled. Thus drm_atomic_helper_commit_modeset_enables() and3232+ * drm_atomic_helper_commit_modeset_disables() cannot be used here, as3333+ * they enable the crtc before bridges' pre-enable, and disable the crtc3434+ * after bridges' post-disable.3535+ *3636+ * Open code the functions here and first call the bridges' pre-enables,3737+ * then crtc enable, then bridges' post-enable (and vice versa for3838+ * disable).3939+ */4040+4141+ drm_atomic_helper_commit_encoder_bridge_disable(ddev, old_state);4242+ drm_atomic_helper_commit_crtc_disable(ddev, old_state);4343+ drm_atomic_helper_commit_encoder_bridge_post_disable(ddev, old_state);4444+4545+ drm_atomic_helper_update_legacy_modeset_state(ddev, old_state);4646+ drm_atomic_helper_calc_timestamping_constants(old_state);4747+ drm_atomic_helper_commit_crtc_set_mode(ddev, old_state);4848+4949+ drm_atomic_helper_commit_planes(ddev, old_state,5050+ DRM_PLANE_COMMIT_ACTIVE_ONLY);5151+5252+ drm_atomic_helper_commit_encoder_bridge_pre_enable(ddev, old_state);5353+ drm_atomic_helper_commit_crtc_enable(ddev, old_state);5454+ drm_atomic_helper_commit_encoder_bridge_enable(ddev, old_state);5555+ drm_atomic_helper_commit_writebacks(ddev, old_state);32563357 drm_atomic_helper_commit_hw_done(old_state);3458 drm_atomic_helper_wait_for_flip_done(ddev, old_state);
+1-1
drivers/gpu/nova-core/Kconfig
···33 depends on 64BIT44 depends on PCI55 depends on RUST66- depends on RUST_FW_LOADER_ABSTRACTIONS66+ select RUST_FW_LOADER_ABSTRACTIONS77 select AUXILIARY_BUS88 default n99 help
+8-6
drivers/gpu/nova-core/gsp/cmdq.rs
···588588 header.length(),589589 );590590591591+ let payload_length = header.payload_length();592592+591593 // Check that the driver read area is large enough for the message.592592- if slice_1.len() + slice_2.len() < header.length() {594594+ if slice_1.len() + slice_2.len() < payload_length {593595 return Err(EIO);594596 }595597596598 // Cut the message slices down to the actual length of the message.597597- let (slice_1, slice_2) = if slice_1.len() > header.length() {598598- // PANIC: we checked above that `slice_1` is at least as long as `msg_header.length()`.599599- (slice_1.split_at(header.length()).0, &slice_2[0..0])599599+ let (slice_1, slice_2) = if slice_1.len() > payload_length {600600+ // PANIC: we checked above that `slice_1` is at least as long as `payload_length`.601601+ (slice_1.split_at(payload_length).0, &slice_2[0..0])600602 } else {601603 (602604 slice_1,603605 // PANIC: we checked above that `slice_1.len() + slice_2.len()` is at least as604604- // large as `msg_header.length()`.605605- slice_2.split_at(header.length() - slice_1.len()).0,606606+ // large as `payload_length`.607607+ slice_2.split_at(payload_length - slice_1.len()).0,606608 )607609 };608610
+38-40
drivers/gpu/nova-core/gsp/fw.rs
···141141// are valid.142142unsafe impl FromBytes for GspFwWprMeta {}143143144144-type GspFwWprMetaBootResumeInfo = r570_144::GspFwWprMeta__bindgen_ty_1;145145-type GspFwWprMetaBootInfo = r570_144::GspFwWprMeta__bindgen_ty_1__bindgen_ty_1;144144+type GspFwWprMetaBootResumeInfo = bindings::GspFwWprMeta__bindgen_ty_1;145145+type GspFwWprMetaBootInfo = bindings::GspFwWprMeta__bindgen_ty_1__bindgen_ty_1;146146147147impl GspFwWprMeta {148148 /// Fill in and return a `GspFwWprMeta` suitable for booting `gsp_firmware` using the···150150 pub(crate) fn new(gsp_firmware: &GspFirmware, fb_layout: &FbLayout) -> Self {151151 Self(bindings::GspFwWprMeta {152152 // CAST: we want to store the bits of `GSP_FW_WPR_META_MAGIC` unmodified.153153- magic: r570_144::GSP_FW_WPR_META_MAGIC as u64,154154- revision: u64::from(r570_144::GSP_FW_WPR_META_REVISION),153153+ magic: bindings::GSP_FW_WPR_META_MAGIC as u64,154154+ revision: u64::from(bindings::GSP_FW_WPR_META_REVISION),155155 sysmemAddrOfRadix3Elf: gsp_firmware.radix3_dma_handle(),156156 sizeOfRadix3Elf: u64::from_safe_cast(gsp_firmware.size),157157 sysmemAddrOfBootloader: gsp_firmware.bootloader.ucode.dma_handle(),···315315#[repr(u32)]316316pub(crate) enum SeqBufOpcode {317317 // Core operation opcodes318318- CoreReset = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESET,319319- CoreResume = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESUME,320320- CoreStart = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_START,321321- CoreWaitForHalt = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_WAIT_FOR_HALT,318318+ CoreReset = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESET,319319+ CoreResume = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESUME,320320+ CoreStart = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_START,321321+ CoreWaitForHalt = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_WAIT_FOR_HALT,322322323323 // Delay opcode324324- DelayUs = 
r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_DELAY_US,324324+ DelayUs = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_DELAY_US,325325326326 // Register operation opcodes327327- RegModify = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_MODIFY,328328- RegPoll = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_POLL,329329- RegStore = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_STORE,330330- RegWrite = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_WRITE,327327+ RegModify = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_MODIFY,328328+ RegPoll = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_POLL,329329+ RegStore = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_STORE,330330+ RegWrite = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_WRITE,331331}332332333333impl fmt::Display for SeqBufOpcode {···351351352352 fn try_from(value: u32) -> Result<SeqBufOpcode> {353353 match value {354354- r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESET => {354354+ bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESET => {355355 Ok(SeqBufOpcode::CoreReset)356356 }357357- r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESUME => {357357+ bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESUME => {358358 Ok(SeqBufOpcode::CoreResume)359359 }360360- r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_START => {360360+ bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_START => {361361 Ok(SeqBufOpcode::CoreStart)362362 }363363- r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_WAIT_FOR_HALT => {363363+ bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_WAIT_FOR_HALT => {364364 Ok(SeqBufOpcode::CoreWaitForHalt)365365 }366366- r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_DELAY_US => Ok(SeqBufOpcode::DelayUs),367367- r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_MODIFY => {366366+ bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_DELAY_US => Ok(SeqBufOpcode::DelayUs),367367+ 
bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_MODIFY => {368368 Ok(SeqBufOpcode::RegModify)369369 }370370- r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_POLL => Ok(SeqBufOpcode::RegPoll),371371- r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_STORE => Ok(SeqBufOpcode::RegStore),372372- r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_WRITE => Ok(SeqBufOpcode::RegWrite),370370+ bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_POLL => Ok(SeqBufOpcode::RegPoll),371371+ bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_STORE => Ok(SeqBufOpcode::RegStore),372372+ bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_WRITE => Ok(SeqBufOpcode::RegWrite),373373 _ => Err(EINVAL),374374 }375375 }···385385/// Wrapper for GSP sequencer register write payload.386386#[repr(transparent)]387387#[derive(Copy, Clone)]388388-pub(crate) struct RegWritePayload(r570_144::GSP_SEQ_BUF_PAYLOAD_REG_WRITE);388388+pub(crate) struct RegWritePayload(bindings::GSP_SEQ_BUF_PAYLOAD_REG_WRITE);389389390390impl RegWritePayload {391391 /// Returns the register address.···408408/// Wrapper for GSP sequencer register modify payload.409409#[repr(transparent)]410410#[derive(Copy, Clone)]411411-pub(crate) struct RegModifyPayload(r570_144::GSP_SEQ_BUF_PAYLOAD_REG_MODIFY);411411+pub(crate) struct RegModifyPayload(bindings::GSP_SEQ_BUF_PAYLOAD_REG_MODIFY);412412413413impl RegModifyPayload {414414 /// Returns the register address.···436436/// Wrapper for GSP sequencer register poll payload.437437#[repr(transparent)]438438#[derive(Copy, Clone)]439439-pub(crate) struct RegPollPayload(r570_144::GSP_SEQ_BUF_PAYLOAD_REG_POLL);439439+pub(crate) struct RegPollPayload(bindings::GSP_SEQ_BUF_PAYLOAD_REG_POLL);440440441441impl RegPollPayload {442442 /// Returns the register address.···469469/// Wrapper for GSP sequencer delay payload.470470#[repr(transparent)]471471#[derive(Copy, Clone)]472472-pub(crate) struct DelayUsPayload(r570_144::GSP_SEQ_BUF_PAYLOAD_DELAY_US);472472+pub(crate) struct 
DelayUsPayload(bindings::GSP_SEQ_BUF_PAYLOAD_DELAY_US);473473474474impl DelayUsPayload {475475 /// Returns the delay value in microseconds.···487487/// Wrapper for GSP sequencer register store payload.488488#[repr(transparent)]489489#[derive(Copy, Clone)]490490-pub(crate) struct RegStorePayload(r570_144::GSP_SEQ_BUF_PAYLOAD_REG_STORE);490490+pub(crate) struct RegStorePayload(bindings::GSP_SEQ_BUF_PAYLOAD_REG_STORE);491491492492impl RegStorePayload {493493 /// Returns the register address.···510510511511/// Wrapper for GSP sequencer buffer command.512512#[repr(transparent)]513513-pub(crate) struct SequencerBufferCmd(r570_144::GSP_SEQUENCER_BUFFER_CMD);513513+pub(crate) struct SequencerBufferCmd(bindings::GSP_SEQUENCER_BUFFER_CMD);514514515515impl SequencerBufferCmd {516516 /// Returns the opcode as a `SeqBufOpcode` enum, or error if invalid.···612612613613/// Wrapper for GSP run CPU sequencer RPC.614614#[repr(transparent)]615615-pub(crate) struct RunCpuSequencer(r570_144::rpc_run_cpu_sequencer_v17_00);615615+pub(crate) struct RunCpuSequencer(bindings::rpc_run_cpu_sequencer_v17_00);616616617617impl RunCpuSequencer {618618 /// Returns the command index.···797797 }798798}799799800800-// SAFETY: We can't derive the Zeroable trait for this binding because the801801-// procedural macro doesn't support the syntax used by bindgen to create the802802-// __IncompleteArrayField types. 
So instead we implement it here, which is safe803803-// because these are explicitly padded structures only containing types for804804-// which any bit pattern, including all zeros, is valid.805805-unsafe impl Zeroable for bindings::rpc_message_header_v {}806806-807800/// GSP Message Element.808801///809802/// This is essentially a message header expected to be followed by the message data.···846853 self.inner.checkSum = checksum;847854 }848855849849- /// Returns the total length of the message.856856+ /// Returns the length of the message's payload.857857+ pub(crate) fn payload_length(&self) -> usize {858858+ // `rpc.length` includes the length of the RPC message header.859859+ num::u32_as_usize(self.inner.rpc.length)860860+ .saturating_sub(size_of::<bindings::rpc_message_header_v>())861861+ }862862+863863+ /// Returns the total length of the message, message and RPC headers included.850864 pub(crate) fn length(&self) -> usize {851851- // `rpc.length` includes the length of the GspRpcHeader but not the message header.852852- size_of::<Self>() - size_of::<bindings::rpc_message_header_v>()853853- + num::u32_as_usize(self.inner.rpc.length)865865+ size_of::<Self>() + self.payload_length()854866 }855867856868 // Returns the sequence number of the message.
+7-4
drivers/gpu/nova-core/gsp/fw/r570_144.rs
···2424 unreachable_pub,2525 unsafe_op_in_unsafe_fn2626)]2727-use kernel::{2828- ffi,2929- prelude::Zeroable, //3030-};2727+use kernel::ffi;2828+use pin_init::MaybeZeroable;2929+3130include!("r570_144/bindings.rs");3131+3232+// SAFETY: This type has a size of zero, so its inclusion into another type should not affect that type's3333+// ability to implement `Zeroable`.3434+unsafe impl<T> kernel::prelude::Zeroable for __IncompleteArrayField<T> {}
···142142}143143EXPORT_SYMBOL_GPL(hv_call_get_partition_property);144144145145+#ifdef CONFIG_X86145146/*146147 * Corresponding sleep states have to be initialized in order for a subsequent147148 * HVCALL_ENTER_SLEEP_STATE call to succeed. Currently only S5 state as per···238237 BUG();239238240239}240240+#endif
+11-9
drivers/hv/mshv_regions.c
···58585959 page_order = folio_order(page_folio(page));6060 /* The hypervisor only supports 4K and 2M page sizes */6161- if (page_order && page_order != HPAGE_PMD_ORDER)6161+ if (page_order && page_order != PMD_ORDER)6262 return -EINVAL;63636464 stride = 1 << page_order;···494494 unsigned long mstart, mend;495495 int ret = -EPERM;496496497497- if (mmu_notifier_range_blockable(range))498498- mutex_lock(®ion->mutex);499499- else if (!mutex_trylock(®ion->mutex))500500- goto out_fail;501501-502502- mmu_interval_set_seq(mni, cur_seq);503503-504497 mstart = max(range->start, region->start_uaddr);505498 mend = min(range->end, region->start_uaddr +506499 (region->nr_pages << HV_HYP_PAGE_SHIFT));···501508 page_offset = HVPFN_DOWN(mstart - region->start_uaddr);502509 page_count = HVPFN_DOWN(mend - mstart);503510511511+ if (mmu_notifier_range_blockable(range))512512+ mutex_lock(®ion->mutex);513513+ else if (!mutex_trylock(®ion->mutex))514514+ goto out_fail;515515+516516+ mmu_interval_set_seq(mni, cur_seq);517517+504518 ret = mshv_region_remap_pages(region, HV_MAP_GPA_NO_ACCESS,505519 page_offset, page_count);506520 if (ret)507507- goto out_fail;521521+ goto out_unlock;508522509523 mshv_region_invalidate_pages(region, page_offset, page_count);510524···519519520520 return true;521521522522+out_unlock:523523+ mutex_unlock(®ion->mutex);522524out_fail:523525 WARN_ONCE(ret,524526 "Failed to invalidate region %#llx-%#llx (range %#lx-%#lx, event: %u, pages %#llx-%#llx, mm: %#llx): %d\n",
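The mshv_regions.c hunk above moves the range clamping ahead of the locking and adds a dedicated unlock path, but keeps the usual mmu-notifier rule: only a blockable invalidation may sleep on the region mutex, a non-blockable one must trylock and ask to be retried. A minimal userspace sketch of that rule, with pthreads standing in for the kernel mutex and illustrative names throughout:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Stand-in for the region mutex taken by the invalidate callback. */
static pthread_mutex_t region_lock = PTHREAD_MUTEX_INITIALIZER;

/* Returns true when the invalidation ran, false when a non-blockable
 * caller hit contention and the notifier must be retried later. */
static bool invalidate_region(bool blockable)
{
	if (blockable)
		pthread_mutex_lock(&region_lock);
	else if (pthread_mutex_trylock(&region_lock) != 0)
		return false;

	/* ... remap to no-access and invalidate the pages under the lock ... */

	pthread_mutex_unlock(&region_lock);
	return true;
}
```

With the lock uncontended, both the blocking and the trylock paths succeed; only a contended non-blockable call reports failure.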
···4141 depends on DEBUG_KERNEL4242 depends on FAULT_INJECTION4343 depends on RUNTIME_TESTING_MENU4444- depends on IOMMU_PT_AMDV14444+ depends on IOMMU_PT_AMDV1=y || IOMMUFD=IOMMU_PT_AMDV14545+ select DMA_SHARED_BUFFER4546 select IOMMUFD_DRIVER4647 default n4748 help
+1-1
drivers/irqchip/irq-gic-v5-its.c
···849849850850 itte = gicv5_its_device_get_itte_ref(its_dev, event_id);851851852852- if (FIELD_GET(GICV5_ITTL2E_VALID, *itte))852852+ if (FIELD_GET(GICV5_ITTL2E_VALID, le64_to_cpu(*itte)))853853 return -EEXIST;854854855855 itt_entry = FIELD_PREP(GICV5_ITTL2E_LPI_ID, lpi) |
+8-2
drivers/irqchip/irq-riscv-imsic-state.c
···477477 lpriv = per_cpu_ptr(imsic->lpriv, cpu);478478479479 bitmap_free(lpriv->dirty_bitmap);480480+ kfree(lpriv->vectors);480481 }481482482483 free_percpu(imsic->lpriv);···491490 int cpu, i;492491493492 /* Allocate per-CPU private state */494494- imsic->lpriv = __alloc_percpu(struct_size(imsic->lpriv, vectors, global->nr_ids + 1),495495- __alignof__(*imsic->lpriv));493493+ imsic->lpriv = alloc_percpu(typeof(*imsic->lpriv));496494 if (!imsic->lpriv)497495 return -ENOMEM;498496···510510 /* Setup lazy timer for synchronization */511511 timer_setup(&lpriv->timer, imsic_local_timer_callback, TIMER_PINNED);512512#endif513513+514514+ /* Allocate vector array */515515+ lpriv->vectors = kcalloc(global->nr_ids + 1, sizeof(*lpriv->vectors),516516+ GFP_KERNEL);517517+ if (!lpriv->vectors)518518+ goto fail_local_cleanup;513519514520 /* Setup vector array */515521 for (i = 0; i <= global->nr_ids; i++) {
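The imsic hunk above replaces the single percpu allocation sized with a flexible array member by a plain percpu struct plus a per-CPU `kcalloc()` of the vector array, freed on the cleanup path. A userspace sketch of that layout change, with `calloc`/`free` standing in for the kernel allocators and illustrative names:

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative stand-in for the per-CPU private state: the vector
 * array is now a separately allocated pointer rather than a flexible
 * array member sized into the same allocation. */
struct local_priv {
	unsigned int *vectors;
};

static int local_priv_setup(struct local_priv *lpriv, unsigned int nr_ids)
{
	/* one slot per ID plus slot 0, mirroring nr_ids + 1 */
	lpriv->vectors = calloc(nr_ids + 1, sizeof(*lpriv->vectors));
	return lpriv->vectors ? 0 : -1;
}

static void local_priv_teardown(struct local_priv *lpriv)
{
	free(lpriv->vectors);
	lpriv->vectors = NULL;
}

/* Exercises one setup/teardown cycle; returns 0 on success. */
static int local_priv_selftest(void)
{
	struct local_priv p;

	if (local_priv_setup(&p, 63))
		return -1;
	if (!p.vectors)
		return -1;
	local_priv_teardown(&p);
	return p.vectors ? -1 : 0;
}
```

Splitting the allocation this way lets the per-CPU container keep a fixed size while the array length follows `nr_ids`.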
···6677config IPU_BRIDGE88 tristate "Intel IPU Bridge"99- depends on ACPI || COMPILE_TEST99+ depends on ACPI1010 depends on I2C1111 help1212 The IPU bridge is a helper library for Intel IPU drivers to
+29
drivers/media/pci/intel/ipu-bridge.c
···55#include <acpi/acpi_bus.h>66#include <linux/cleanup.h>77#include <linux/device.h>88+#include <linux/dmi.h>89#include <linux/i2c.h>910#include <linux/mei_cl_bus.h>1011#include <linux/platform_device.h>···9796 IPU_SENSOR_CONFIG("SONY471A", 1, 200000000),9897 /* Toshiba T4KA3 */9998 IPU_SENSOR_CONFIG("XMCC0003", 1, 321468000),9999+};100100+101101+/*102102+ * DMI matches for laptops which have their sensor mounted upside-down103103+ * without reporting a rotation of 180° in either the SSDB or the _PLD.104104+ */105105+static const struct dmi_system_id upside_down_sensor_dmi_ids[] = {106106+ {107107+ .matches = {108108+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Dell Inc."),109109+ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "XPS 13 9350"),110110+ },111111+ .driver_data = "OVTI02C1",112112+ },113113+ {114114+ .matches = {115115+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Dell Inc."),116116+ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "XPS 16 9640"),117117+ },118118+ .driver_data = "OVTI02C1",119119+ },120120+ {} /* Terminating entry */100121};101122102123static const struct ipu_property_names prop_names = {···271248static u32 ipu_bridge_parse_rotation(struct acpi_device *adev,272249 struct ipu_sensor_ssdb *ssdb)273250{251251+ const struct dmi_system_id *dmi_id;252252+253253+ dmi_id = dmi_first_match(upside_down_sensor_dmi_ids);254254+ if (dmi_id && acpi_dev_hid_match(adev, dmi_id->driver_data))255255+ return 180;256256+274257 switch (ssdb->degree) {275258 case IPU_SENSOR_ROTATION_NORMAL:276259 return 0;
···122122123123#define MEI_DEV_ID_WCL_P 0x4D70 /* Wildcat Lake P */124124125125+#define MEI_DEV_ID_NVL_S 0x6E68 /* Nova Lake Point S */126126+125127/*126128 * MEI HW Section127129 */
···5566config MISC_RP177 tristate "RaspberryPi RP1 misc device"88- depends on OF_IRQ && OF_OVERLAY && PCI_MSI && PCI_QUIRKS99- select PCI_DYNAMIC_OF_NODES88+ depends on OF_IRQ && PCI_MSI109 help1110 Support the RP1 peripheral chip found on Raspberry Pi 5 board.1211···14151516 The driver is responsible for enabling the DT node once the PCIe1617 endpoint has been configured, and handling interrupts.1717-1818- This driver uses an overlay to load other drivers to support for1919- RP1 internal sub-devices.
···11-// SPDX-License-Identifier: (GPL-2.0 OR MIT)22-33-/*44- * The dts overlay is included from the dts directory so55- * it can be possible to check it with CHECK_DTBS while66- * also compile it from the driver source directory.77- */88-99-/dts-v1/;1010-/plugin/;1111-1212-/ {1313- fragment@0 {1414- target-path="";1515- __overlay__ {1616- compatible = "pci1de4,1";1717- #address-cells = <3>;1818- #size-cells = <2>;1919- interrupt-controller;2020- #interrupt-cells = <2>;2121-2222- #include "arm64/broadcom/rp1-common.dtsi"2323- };2424- };2525-};
+4-33
drivers/misc/rp1/rp1_pci.c
···3434/* Interrupts */3535#define RP1_INT_END 6136363737-/* Embedded dtbo symbols created by cmd_wrap_S_dtb in scripts/Makefile.lib */3838-extern char __dtbo_rp1_pci_begin[];3939-extern char __dtbo_rp1_pci_end[];4040-4137struct rp1_dev {4238 struct pci_dev *pdev;4339 struct irq_domain *domain;4440 struct irq_data *pcie_irqds[64];4541 void __iomem *bar1;4646- int ovcs_id; /* overlay changeset id */4742 bool level_triggered_irq[RP1_INT_END];4843};4944···179184180185static int rp1_probe(struct pci_dev *pdev, const struct pci_device_id *id)181186{182182- u32 dtbo_size = __dtbo_rp1_pci_end - __dtbo_rp1_pci_begin;183183- void *dtbo_start = __dtbo_rp1_pci_begin;184187 struct device *dev = &pdev->dev;185188 struct device_node *rp1_node;186186- bool skip_ovl = true;187189 struct rp1_dev *rp1;188190 int err = 0;189191 int i;190192191191- /*192192- * Either use rp1_nexus node if already present in DT, or193193- * set a flag to load it from overlay at runtime194194- */195195- rp1_node = of_find_node_by_name(NULL, "rp1_nexus");196196- if (!rp1_node) {197197- rp1_node = dev_of_node(dev);198198- skip_ovl = false;199199- }193193+ rp1_node = dev_of_node(dev);200194201195 if (!rp1_node) {202196 dev_err(dev, "Missing of_node for device\n");···260276 rp1_chained_handle_irq, rp1);261277 }262278263263- if (!skip_ovl) {264264- err = of_overlay_fdt_apply(dtbo_start, dtbo_size, &rp1->ovcs_id,265265- rp1_node);266266- if (err)267267- goto err_unregister_interrupts;268268- }269269-270279 err = of_platform_default_populate(rp1_node, NULL, dev);271280 if (err) {272281 dev_err_probe(&pdev->dev, err, "Error populating devicetree\n");273273- goto err_unload_overlay;282282+ goto err_unregister_interrupts;274283 }275284276276- if (skip_ovl)277277- of_node_put(rp1_node);285285+ of_node_put(rp1_node);278286279287 return 0;280288281281-err_unload_overlay:282282- of_overlay_remove(&rp1->ovcs_id);283289err_unregister_interrupts:284290 rp1_unregister_interrupts(pdev);285291err_put_node:286286- if 
(skip_ovl)287287- of_node_put(rp1_node);292292+ of_node_put(rp1_node);288293289294 return err;290295}291296292297static void rp1_remove(struct pci_dev *pdev)293298{294294- struct rp1_dev *rp1 = pci_get_drvdata(pdev);295299 struct device *dev = &pdev->dev;296300297301 of_platform_depopulate(dev);298298- of_overlay_remove(&rp1->ovcs_id);299302 rp1_unregister_interrupts(pdev);300303}301304
+1-1
drivers/mtd/nand/ecc-sw-hamming.c
···88 *99 * Completely replaces the previous ECC implementation which was written by:1010 * Steven J. Hill (sjhill@realitydiluted.com)1111- * Thomas Gleixner (tglx@linutronix.de)1111+ * Thomas Gleixner (tglx@kernel.org)1212 *1313 * Information on how this algorithm works and how it was developed1414 * can be found in Documentation/driver-api/mtd/nand_ecc.rst
+1-1
drivers/mtd/nand/raw/diskonchip.c
···1111 * Error correction code lifted from the old docecc code1212 * Author: Fabrice Bellard (fabrice.bellard@netgem.com)1313 * Copyright (C) 2000 Netgem S.A.1414- * converted to the generic Reed-Solomon library by Thomas Gleixner <tglx@linutronix.de>1414+ * converted to the generic Reed-Solomon library by Thomas Gleixner <tglx@kernel.org>1515 *1616 * Interface to generic NAND code for M-Systems DiskOnChip devices1717 */
+2-2
drivers/mtd/nand/raw/nand_base.c
···88 * http://www.linux-mtd.infradead.org/doc/nand.html99 *1010 * Copyright (C) 2000 Steven J. Hill (sjhill@realitydiluted.com)1111- * 2002-2006 Thomas Gleixner (tglx@linutronix.de)1111+ * 2002-2006 Thomas Gleixner (tglx@kernel.org)1212 *1313 * Credits:1414 * David Woodhouse for adding multichip support···6594659465956595MODULE_LICENSE("GPL");65966596MODULE_AUTHOR("Steven J. Hill <sjhill@realitydiluted.com>");65976597-MODULE_AUTHOR("Thomas Gleixner <tglx@linutronix.de>");65976597+MODULE_AUTHOR("Thomas Gleixner <tglx@kernel.org>");65986598MODULE_DESCRIPTION("Generic NAND flash driver code");
···11// SPDX-License-Identifier: GPL-2.0-only22/*33- * Copyright (C) 2002 Thomas Gleixner (tglx@linutronix.de)33+ * Copyright (C) 2002 Thomas Gleixner (tglx@kernel.org)44 */5566#include <linux/sizes.h>
+1-1
drivers/mtd/nand/raw/nand_jedec.c
···11// SPDX-License-Identifier: GPL-2.022/*33 * Copyright (C) 2000 Steven J. Hill (sjhill@realitydiluted.com)44- * 2002-2006 Thomas Gleixner (tglx@linutronix.de)44+ * 2002-2006 Thomas Gleixner (tglx@kernel.org)55 *66 * Credits:77 * David Woodhouse for adding multichip support
+1-1
drivers/mtd/nand/raw/nand_legacy.c
···11// SPDX-License-Identifier: GPL-2.022/*33 * Copyright (C) 2000 Steven J. Hill (sjhill@realitydiluted.com)44- * 2002-2006 Thomas Gleixner (tglx@linutronix.de)44+ * 2002-2006 Thomas Gleixner (tglx@kernel.org)55 *66 * Credits:77 * David Woodhouse for adding multichip support
+1-1
drivers/mtd/nand/raw/nand_onfi.c
···11// SPDX-License-Identifier: GPL-2.022/*33 * Copyright (C) 2000 Steven J. Hill (sjhill@realitydiluted.com)44- * 2002-2006 Thomas Gleixner (tglx@linutronix.de)44+ * 2002-2006 Thomas Gleixner (tglx@kernel.org)55 *66 * Credits:77 * David Woodhouse for adding multichip support
+1-1
drivers/mtd/nand/raw/ndfc.c
···272272module_platform_driver(ndfc_driver);273273274274MODULE_LICENSE("GPL");275275-MODULE_AUTHOR("Thomas Gleixner <tglx@linutronix.de>");275275+MODULE_AUTHOR("Thomas Gleixner <tglx@kernel.org>");276276MODULE_DESCRIPTION("OF Platform driver for NDFC");
+5-2
drivers/net/can/Kconfig
···11# SPDX-License-Identifier: GPL-2.0-only2233menuconfig CAN_DEV44- bool "CAN Device Drivers"44+ tristate "CAN Device Drivers"55 default y66 depends on CAN77 help···1717 virtual ones. If you own such devices or plan to use the virtual CAN1818 interfaces to develop applications, say Y here.19192020-if CAN_DEV && CAN2020+ To compile as a module, choose M here: the module will be called2121+ can-dev.2222+2323+if CAN_DEV21242225config CAN_VCAN2326 tristate "Virtual Local CAN Interface (vcan)"
···751751 hf, parent->hf_size_rx,752752 gs_usb_receive_bulk_callback, parent);753753754754+ usb_anchor_urb(urb, &parent->rx_submitted);755755+754756 rc = usb_submit_urb(urb, GFP_ATOMIC);755757756758 /* USB failure take down all interfaces */
+15
drivers/net/can/vcan.c
···130130 return NETDEV_TX_OK;131131}132132133133+static void vcan_set_cap_info(struct net_device *dev)134134+{135135+ u32 can_cap = CAN_CAP_CC;136136+137137+ if (dev->mtu > CAN_MTU)138138+ can_cap |= CAN_CAP_FD;139139+140140+ if (dev->mtu >= CANXL_MIN_MTU)141141+ can_cap |= CAN_CAP_XL;142142+143143+ can_set_cap(dev, can_cap);144144+}145145+133146static int vcan_change_mtu(struct net_device *dev, int new_mtu)134147{135148 /* Do not allow changing the MTU while running */···154141 return -EINVAL;155142156143 WRITE_ONCE(dev->mtu, new_mtu);144144+ vcan_set_cap_info(dev);157145 return 0;158146}159147···176162 dev->tx_queue_len = 0;177163 dev->flags = IFF_NOARP;178164 can_set_ml_priv(dev, netdev_priv(dev));165165+ vcan_set_cap_info(dev);179166180167 /* set flags according to driver capabilities */181168 if (echo)
+15
drivers/net/can/vxcan.c
···125125 return iflink;126126}127127128128+static void vxcan_set_cap_info(struct net_device *dev)129129+{130130+ u32 can_cap = CAN_CAP_CC;131131+132132+ if (dev->mtu > CAN_MTU)133133+ can_cap |= CAN_CAP_FD;134134+135135+ if (dev->mtu >= CANXL_MIN_MTU)136136+ can_cap |= CAN_CAP_XL;137137+138138+ can_set_cap(dev, can_cap);139139+}140140+128141static int vxcan_change_mtu(struct net_device *dev, int new_mtu)129142{130143 /* Do not allow changing the MTU while running */···149136 return -EINVAL;150137151138 WRITE_ONCE(dev->mtu, new_mtu);139139+ vxcan_set_cap_info(dev);152140 return 0;153141}154142···181167182168 can_ml = netdev_priv(dev) + ALIGN(sizeof(struct vxcan_priv), NETDEV_ALIGN);183169 can_set_ml_priv(dev, can_ml);170170+ vxcan_set_cap_info(dev);184171}185172186173/* forward declaration for rtnl_create_link() */
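Both vcan and vxcan gain the same MTU-to-capability mapping above: classical CAN is always advertised, CAN FD once the MTU exceeds the classical frame size, CAN XL from the XL minimum MTU upwards. A compact sketch of that mapping; the flag values and MTU thresholds below are illustrative placeholders, not the definitions from linux/can.h:

```c
#include <assert.h>
#include <stdint.h>

#define CAP_CC		0x1	/* Classical CAN, always supported */
#define CAP_FD		0x2	/* CAN FD, needs an MTU above classical */
#define CAP_XL		0x4	/* CAN XL, needs at least the XL minimum MTU */

#define MTU_CC		16	/* placeholder for CAN_MTU */
#define MTU_XL_MIN	80	/* placeholder for CANXL_MIN_MTU */

/* Derive the capability mask from the interface MTU, mirroring the
 * shape of the vcan/vxcan set_cap_info helpers. */
static uint32_t cap_for_mtu(unsigned int mtu)
{
	uint32_t cap = CAP_CC;

	if (mtu > MTU_CC)
		cap |= CAP_FD;
	if (mtu >= MTU_XL_MIN)
		cap |= CAP_XL;
	return cap;
}
```

Calling the helper from both setup and the MTU-change path, as the hunks do, keeps the advertised capabilities consistent with whatever MTU is currently configured.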
···6326632663276327void mlx5e_priv_cleanup(struct mlx5e_priv *priv)63286328{63296329+ bool destroying = test_bit(MLX5E_STATE_DESTROYING, &priv->state);63296330 int i;6330633163316332 /* bail if change profile failed and also rollback failed */···63546353 }6355635463566355 memset(priv, 0, sizeof(*priv));63566356+ if (destroying) /* restore destroying bit, to allow unload */63576357+ set_bit(MLX5E_STATE_DESTROYING, &priv->state);63576358}6358635963596360static unsigned int mlx5e_get_max_num_txqs(struct mlx5_core_dev *mdev,···65886585 return err;65896586}6590658765916591-int mlx5e_netdev_change_profile(struct mlx5e_priv *priv,65926592- const struct mlx5e_profile *new_profile, void *new_ppriv)65886588+int mlx5e_netdev_change_profile(struct net_device *netdev,65896589+ struct mlx5_core_dev *mdev,65906590+ const struct mlx5e_profile *new_profile,65916591+ void *new_ppriv)65936592{65946594- const struct mlx5e_profile *orig_profile = priv->profile;65956595- struct net_device *netdev = priv->netdev;65966596- struct mlx5_core_dev *mdev = priv->mdev;65976597- void *orig_ppriv = priv->ppriv;65936593+ struct mlx5e_priv *priv = netdev_priv(netdev);65946594+ const struct mlx5e_profile *orig_profile;65986595 int err, rollback_err;65966596+ void *orig_ppriv;6599659766006600- /* cleanup old profile */66016601- mlx5e_detach_netdev(priv);66026602- priv->profile->cleanup(priv);66036603- mlx5e_priv_cleanup(priv);65986598+ orig_profile = priv->profile;65996599+ orig_ppriv = priv->ppriv;66006600+66016601+ /* NULL could happen if previous change_profile failed to rollback */66026602+ if (priv->profile) {66036603+ WARN_ON_ONCE(priv->mdev != mdev);66046604+ /* cleanup old profile */66056605+ mlx5e_detach_netdev(priv);66066606+ priv->profile->cleanup(priv);66076607+ mlx5e_priv_cleanup(priv);66086608+ }66096609+ /* priv members are not valid from this point ... 
*/6604661066056611 if (mdev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR) {66066612 mlx5e_netdev_init_profile(netdev, mdev, new_profile, new_ppriv);···66266614 return 0;6627661566286616rollback:66176617+ if (!orig_profile) {66186618+ netdev_warn(netdev, "no original profile to rollback to\n");66196619+ priv->profile = NULL;66206620+ return err;66216621+ }66226622+66296623 rollback_err = mlx5e_netdev_attach_profile(netdev, mdev, orig_profile, orig_ppriv);66306630- if (rollback_err)66316631- netdev_err(netdev, "%s: failed to rollback to orig profile, %d\n",66326632- __func__, rollback_err);66246624+ if (rollback_err) {66256625+ netdev_err(netdev, "failed to rollback to orig profile, %d\n",66266626+ rollback_err);66276627+ priv->profile = NULL;66286628+ }66336629 return err;66346630}6635663166366636-void mlx5e_netdev_attach_nic_profile(struct mlx5e_priv *priv)66326632+void mlx5e_netdev_attach_nic_profile(struct net_device *netdev,66336633+ struct mlx5_core_dev *mdev)66376634{66386638- mlx5e_netdev_change_profile(priv, &mlx5e_nic_profile, NULL);66356635+ mlx5e_netdev_change_profile(netdev, mdev, &mlx5e_nic_profile, NULL);66396636}6640663766416641-void mlx5e_destroy_netdev(struct mlx5e_priv *priv)66386638+void mlx5e_destroy_netdev(struct net_device *netdev)66426639{66436643- struct net_device *netdev = priv->netdev;66406640+ struct mlx5e_priv *priv = netdev_priv(netdev);6644664166456645- mlx5e_priv_cleanup(priv);66426642+ if (priv->profile)66436643+ mlx5e_priv_cleanup(priv);66466644 free_netdev(netdev);66476645}66486646···66606638{66616639 struct mlx5_adev *edev = container_of(adev, struct mlx5_adev, adev);66626640 struct mlx5e_dev *mlx5e_dev = auxiliary_get_drvdata(adev);66636663- struct mlx5e_priv *priv = mlx5e_dev->priv;66646664- struct net_device *netdev = priv->netdev;66416641+ struct mlx5e_priv *priv = netdev_priv(mlx5e_dev->netdev);66426642+ struct net_device *netdev = mlx5e_dev->netdev;66656643 struct mlx5_core_dev *mdev = edev->mdev;66666644 struct mlx5_core_dev 
*pos, *to;66676645 int err, i;···6707668567086686static int _mlx5e_suspend(struct auxiliary_device *adev, bool pre_netdev_reg)67096687{66886688+ struct mlx5_adev *edev = container_of(adev, struct mlx5_adev, adev);67106689 struct mlx5e_dev *mlx5e_dev = auxiliary_get_drvdata(adev);67116711- struct mlx5e_priv *priv = mlx5e_dev->priv;67126712- struct net_device *netdev = priv->netdev;67136713- struct mlx5_core_dev *mdev = priv->mdev;66906690+ struct mlx5e_priv *priv = netdev_priv(mlx5e_dev->netdev);66916691+ struct net_device *netdev = mlx5e_dev->netdev;66926692+ struct mlx5_core_dev *mdev = edev->mdev;67146693 struct mlx5_core_dev *pos;67156694 int i;67166695···67726749 goto err_devlink_port_unregister;67736750 }67746751 SET_NETDEV_DEVLINK_PORT(netdev, &mlx5e_dev->dl_port);67526752+ mlx5e_dev->netdev = netdev;6775675367766754 mlx5e_build_nic_netdev(netdev);6777675567786756 priv = netdev_priv(netdev);67796779- mlx5e_dev->priv = priv;6780675767816758 priv->profile = profile;67826759 priv->ppriv = NULL;···68096786err_profile_cleanup:68106787 profile->cleanup(priv);68116788err_destroy_netdev:68126812- mlx5e_destroy_netdev(priv);67896789+ mlx5e_destroy_netdev(netdev);68136790err_devlink_port_unregister:68146791 mlx5e_devlink_port_unregister(mlx5e_dev);68156792err_devlink_unregister:···68396816{68406817 struct mlx5_adev *edev = container_of(adev, struct mlx5_adev, adev);68416818 struct mlx5e_dev *mlx5e_dev = auxiliary_get_drvdata(adev);68426842- struct mlx5e_priv *priv = mlx5e_dev->priv;68196819+ struct net_device *netdev = mlx5e_dev->netdev;68206820+ struct mlx5e_priv *priv = netdev_priv(netdev);68436821 struct mlx5_core_dev *mdev = edev->mdev;6844682268456823 mlx5_core_uplink_netdev_set(mdev, NULL);68466846- mlx5e_dcbnl_delete_app(priv);68246824+68256825+ if (priv->profile)68266826+ mlx5e_dcbnl_delete_app(priv);68476827 /* When unload driver, the netdev is in registered state68486828 * if it's from legacy mode. 
If from switchdev mode, it68496829 * is already unregistered before changing to NIC profile.68506830 */68516851- if (priv->netdev->reg_state == NETREG_REGISTERED) {68526852- unregister_netdev(priv->netdev);68316831+ if (netdev->reg_state == NETREG_REGISTERED) {68326832+ unregister_netdev(netdev);68536833 _mlx5e_suspend(adev, false);68546834 } else {68556835 struct mlx5_core_dev *pos;···68676841 /* Avoid cleanup if profile rollback failed. */68686842 if (priv->profile)68696843 priv->profile->cleanup(priv);68706870- mlx5e_destroy_netdev(priv);68446844+ mlx5e_destroy_netdev(netdev);68716845 mlx5e_devlink_port_unregister(mlx5e_dev);68726846 mlx5e_destroy_devlink(mlx5e_dev);68736847}
+7-8
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
···15081508{15091509 struct mlx5e_rep_priv *rpriv = mlx5e_rep_to_rep_priv(rep);15101510 struct net_device *netdev;15111511- struct mlx5e_priv *priv;15121511 int err;1513151215141513 netdev = mlx5_uplink_netdev_get(dev);15151514 if (!netdev)15161515 return 0;1517151615181518- priv = netdev_priv(netdev);15191519- rpriv->netdev = priv->netdev;15201520- err = mlx5e_netdev_change_profile(priv, &mlx5e_uplink_rep_profile,15211521- rpriv);15171517+ /* must not use netdev_priv(netdev), it might not be initialized yet */15181518+ rpriv->netdev = netdev;15191519+ err = mlx5e_netdev_change_profile(netdev, dev,15201520+ &mlx5e_uplink_rep_profile, rpriv);15221521 mlx5_uplink_netdev_put(dev, netdev);15231522 return err;15241523}···15451546 if (!(priv->mdev->priv.flags & MLX5_PRIV_FLAGS_SWITCH_LEGACY))15461547 unregister_netdev(netdev);1547154815481548- mlx5e_netdev_attach_nic_profile(priv);15491549+ mlx5e_netdev_attach_nic_profile(netdev, priv->mdev);15491550}1550155115511552static int···16111612 priv->profile->cleanup(priv);1612161316131614err_destroy_netdev:16141614- mlx5e_destroy_netdev(netdev_priv(netdev));16151615+ mlx5e_destroy_netdev(netdev);16151616 return err;16161617}16171618···16661667 mlx5e_rep_vnic_reporter_destroy(priv);16671668 mlx5e_detach_netdev(priv);16681669 priv->profile->cleanup(priv);16691669- mlx5e_destroy_netdev(priv);16701670+ mlx5e_destroy_netdev(netdev);16701671free_ppriv:16711672 kvfree(ppriv); /* mlx5e_rep_priv */16721673}
+3
drivers/net/hyperv/netvsc_drv.c
···17501750 rxfh->hfunc != ETH_RSS_HASH_TOP)17511751 return -EOPNOTSUPP;1752175217531753+ if (!ndc->rx_table_sz)17541754+ return -EOPNOTSUPP;17551755+17531756 rndis_dev = ndev->extension;17541757 if (rxfh->indir) {17551758 for (i = 0; i < ndc->rx_table_sz; i++)
···17451745 val |= YT8521_LED_1000_ON_EN;1746174617471747 if (test_bit(TRIGGER_NETDEV_FULL_DUPLEX, &rules))17481748- val |= YT8521_LED_HDX_ON_EN;17481748+ val |= YT8521_LED_FDX_ON_EN;1749174917501750 if (test_bit(TRIGGER_NETDEV_HALF_DUPLEX, &rules))17511751- val |= YT8521_LED_FDX_ON_EN;17511751+ val |= YT8521_LED_HDX_ON_EN;1752175217531753 if (test_bit(TRIGGER_NETDEV_TX, &rules) ||17541754 test_bit(TRIGGER_NETDEV_RX, &rules))
+43-132
drivers/net/virtio_net.c
···425425 u16 rss_indir_table_size;426426 u32 rss_hash_types_supported;427427 u32 rss_hash_types_saved;428428- struct virtio_net_rss_config_hdr *rss_hdr;429429- struct virtio_net_rss_config_trailer rss_trailer;430430- u8 rss_hash_key_data[VIRTIO_NET_RSS_MAX_KEY_SIZE];431428432429 /* Has control virtqueue */433430 bool has_cvq;···438441 /* Packet virtio header size */439442 u8 hdr_len;440443441441- /* Work struct for delayed refilling if we run low on memory. */442442- struct delayed_work refill;443443-444444 /* UDP tunnel support */445445 bool tx_tnl;446446447447 bool rx_tnl;448448449449 bool rx_tnl_csum;450450-451451- /* Is delayed refill enabled? */452452- bool refill_enabled;453453-454454- /* The lock to synchronize the access to refill_enabled */455455- spinlock_t refill_lock;456450457451 /* Work struct for config space updates */458452 struct work_struct config_work;···481493 struct failover *failover;482494483495 u64 device_stats_cap;496496+497497+ struct virtio_net_rss_config_hdr *rss_hdr;498498+499499+ /* Must be last as it ends in a flexible-array member. 
*/500500+ TRAILING_OVERLAP(struct virtio_net_rss_config_trailer, rss_trailer, hash_key_data,501501+ u8 rss_hash_key_data[VIRTIO_NET_RSS_MAX_KEY_SIZE];502502+ );484503};504504+static_assert(offsetof(struct virtnet_info, rss_trailer.hash_key_data) ==505505+ offsetof(struct virtnet_info, rss_hash_key_data));485506486507struct padded_vnet_hdr {487508 struct virtio_net_hdr_v1_hash hdr;···715718 give_pages(rq, buf);716719 else717720 put_page(virt_to_head_page(buf));718718-}719719-720720-static void enable_delayed_refill(struct virtnet_info *vi)721721-{722722- spin_lock_bh(&vi->refill_lock);723723- vi->refill_enabled = true;724724- spin_unlock_bh(&vi->refill_lock);725725-}726726-727727-static void disable_delayed_refill(struct virtnet_info *vi)728728-{729729- spin_lock_bh(&vi->refill_lock);730730- vi->refill_enabled = false;731731- spin_unlock_bh(&vi->refill_lock);732721}733722734723static void enable_rx_mode_work(struct virtnet_info *vi)···29312948 napi_disable(napi);29322949}2933295029342934-static void refill_work(struct work_struct *work)29352935-{29362936- struct virtnet_info *vi =29372937- container_of(work, struct virtnet_info, refill.work);29382938- bool still_empty;29392939- int i;29402940-29412941- for (i = 0; i < vi->curr_queue_pairs; i++) {29422942- struct receive_queue *rq = &vi->rq[i];29432943-29442944- /*29452945- * When queue API support is added in the future and the call29462946- * below becomes napi_disable_locked, this driver will need to29472947- * be refactored.29482948- *29492949- * One possible solution would be to:29502950- * - cancel refill_work with cancel_delayed_work (note:29512951- * non-sync)29522952- * - cancel refill_work with cancel_delayed_work_sync in29532953- * virtnet_remove after the netdev is unregistered29542954- * - wrap all of the work in a lock (perhaps the netdev29552955- * instance lock)29562956- * - check netif_running() and return early to avoid a race29572957- */29582958- napi_disable(&rq->napi);29592959- still_empty = 
!try_fill_recv(vi, rq, GFP_KERNEL);29602960- virtnet_napi_do_enable(rq->vq, &rq->napi);29612961-29622962- /* In theory, this can happen: if we don't get any buffers in29632963- * we will *never* try to fill again.29642964- */29652965- if (still_empty)29662966- schedule_delayed_work(&vi->refill, HZ/2);29672967- }29682968-}29692969-29702951static int virtnet_receive_xsk_bufs(struct virtnet_info *vi,29712952 struct receive_queue *rq,29722953 int budget,···29933046 else29943047 packets = virtnet_receive_packets(vi, rq, budget, xdp_xmit, &stats);2995304830493049+ u64_stats_set(&stats.packets, packets);29963050 if (rq->vq->num_free > min((unsigned int)budget, virtqueue_get_vring_size(rq->vq)) / 2) {29972997- if (!try_fill_recv(vi, rq, GFP_ATOMIC)) {29982998- spin_lock(&vi->refill_lock);29992999- if (vi->refill_enabled)30003000- schedule_delayed_work(&vi->refill, 0);30013001- spin_unlock(&vi->refill_lock);30023002- }30513051+ if (!try_fill_recv(vi, rq, GFP_ATOMIC))30523052+ /* We need to retry refilling in the next NAPI poll so30533053+ * we must return budget to make sure the NAPI is30543054+ * repolled.30553055+ */30563056+ packets = budget;30033057 }3004305830053005- u64_stats_set(&stats.packets, packets);30063059 u64_stats_update_begin(&rq->stats.syncp);30073060 for (i = 0; i < ARRAY_SIZE(virtnet_rq_stats_desc); i++) {30083061 size_t offset = virtnet_rq_stats_desc[i].offset;···31733226 struct virtnet_info *vi = netdev_priv(dev);31743227 int i, err;3175322831763176- enable_delayed_refill(vi);31773177-31783229 for (i = 0; i < vi->max_queue_pairs; i++) {31793230 if (i < vi->curr_queue_pairs)31803180- /* Make sure we have some buffers: if oom use wq. 
*/31813181- if (!try_fill_recv(vi, &vi->rq[i], GFP_KERNEL))31823182- schedule_delayed_work(&vi->refill, 0);32313231+ /* Pre-fill rq aggressively, to make sure we are ready to32323232+ * get packets immediately.32333233+ */32343234+ try_fill_recv(vi, &vi->rq[i], GFP_KERNEL);3183323531843236 err = virtnet_enable_queue_pair(vi, i);31853237 if (err < 0)···31973251 return 0;3198325231993253err_enable_qp:32003200- disable_delayed_refill(vi);32013201- cancel_delayed_work_sync(&vi->refill);32023202-32033254 for (i--; i >= 0; i--) {32043255 virtnet_disable_queue_pair(vi, i);32053256 virtnet_cancel_dim(vi, &vi->rq[i].dim);···33753432 return NETDEV_TX_OK;33763433}3377343433783378-static void __virtnet_rx_pause(struct virtnet_info *vi,33793379- struct receive_queue *rq)34353435+static void virtnet_rx_pause(struct virtnet_info *vi,34363436+ struct receive_queue *rq)33803437{33813438 bool running = netif_running(vi->dev);33823439···33903447{33913448 int i;3392344933933393- /*33943394- * Make sure refill_work does not run concurrently to33953395- * avoid napi_disable race which leads to deadlock.33963396- */33973397- disable_delayed_refill(vi);33983398- cancel_delayed_work_sync(&vi->refill);33993450 for (i = 0; i < vi->max_queue_pairs; i++)34003400- __virtnet_rx_pause(vi, &vi->rq[i]);34513451+ virtnet_rx_pause(vi, &vi->rq[i]);34013452}3402345334033403-static void virtnet_rx_pause(struct virtnet_info *vi, struct receive_queue *rq)34543454+static void virtnet_rx_resume(struct virtnet_info *vi,34553455+ struct receive_queue *rq,34563456+ bool refill)34043457{34053405- /*34063406- * Make sure refill_work does not run concurrently to34073407- * avoid napi_disable race which leads to deadlock.34083408- */34093409- disable_delayed_refill(vi);34103410- cancel_delayed_work_sync(&vi->refill);34113411- __virtnet_rx_pause(vi, rq);34123412-}34583458+ if (netif_running(vi->dev)) {34593459+ /* Pre-fill rq aggressively, to make sure we are ready to get34603460+ * packets immediately.34613461+ 
*/34623462+ if (refill)34633463+ try_fill_recv(vi, rq, GFP_KERNEL);3413346434143414-static void __virtnet_rx_resume(struct virtnet_info *vi,34153415- struct receive_queue *rq,34163416- bool refill)34173417-{34183418- bool running = netif_running(vi->dev);34193419- bool schedule_refill = false;34203420-34213421- if (refill && !try_fill_recv(vi, rq, GFP_KERNEL))34223422- schedule_refill = true;34233423- if (running)34243465 virtnet_napi_enable(rq);34253425-34263426- if (schedule_refill)34273427- schedule_delayed_work(&vi->refill, 0);34663466+ }34283467}3429346834303469static void virtnet_rx_resume_all(struct virtnet_info *vi)34313470{34323471 int i;3433347234343434- enable_delayed_refill(vi);34353473 for (i = 0; i < vi->max_queue_pairs; i++) {34363474 if (i < vi->curr_queue_pairs)34373437- __virtnet_rx_resume(vi, &vi->rq[i], true);34753475+ virtnet_rx_resume(vi, &vi->rq[i], true);34383476 else34393439- __virtnet_rx_resume(vi, &vi->rq[i], false);34773477+ virtnet_rx_resume(vi, &vi->rq[i], false);34403478 }34413441-}34423442-34433443-static void virtnet_rx_resume(struct virtnet_info *vi, struct receive_queue *rq)34443444-{34453445- enable_delayed_refill(vi);34463446- __virtnet_rx_resume(vi, rq, true);34473479}3448348034493481static int virtnet_rx_resize(struct virtnet_info *vi,···34343516 if (err)34353517 netdev_err(vi->dev, "resize rx fail: rx queue index: %d err: %d\n", qindex, err);3436351834373437- virtnet_rx_resume(vi, rq);35193519+ virtnet_rx_resume(vi, rq, true);34383520 return err;34393521}34403522···37473829 }37483830succ:37493831 vi->curr_queue_pairs = queue_pairs;37503750- /* virtnet_open() will refill when device is going to up. 
*/37513751- spin_lock_bh(&vi->refill_lock);37523752- if (dev->flags & IFF_UP && vi->refill_enabled)37533753- schedule_delayed_work(&vi->refill, 0);37543754- spin_unlock_bh(&vi->refill_lock);38323832+ if (dev->flags & IFF_UP) {38333833+ local_bh_disable();38343834+ for (int i = 0; i < vi->curr_queue_pairs; ++i)38353835+ virtqueue_napi_schedule(&vi->rq[i].napi, vi->rq[i].vq);38363836+ local_bh_enable();38373837+ }3755383837563839 return 0;37573840}···37623843 struct virtnet_info *vi = netdev_priv(dev);37633844 int i;3764384537653765- /* Make sure NAPI doesn't schedule refill work */37663766- disable_delayed_refill(vi);37673767- /* Make sure refill_work doesn't re-enable napi! */37683768- cancel_delayed_work_sync(&vi->refill);37693846 /* Prevent the config change callback from changing carrier37703847 * after close37713848 */···5717580257185803 virtio_device_ready(vdev);5719580457205720- enable_delayed_refill(vi);57215805 enable_rx_mode_work(vi);5722580657235807 if (netif_running(vi->dev)) {···5806589258075893 rq->xsk_pool = pool;5808589458095809- virtnet_rx_resume(vi, rq);58955895+ virtnet_rx_resume(vi, rq, true);5810589658115897 if (pool)58125898 return 0;···64736559 if (!vi->rq)64746560 goto err_rq;6475656164766476- INIT_DELAYED_WORK(&vi->refill, refill_work);64776562 for (i = 0; i < vi->max_queue_pairs; i++) {64786563 vi->rq[i].pages = NULL;64796564 netif_napi_add_config(vi->dev, &vi->rq[i].napi, virtnet_poll,···6814690168156902 INIT_WORK(&vi->config_work, virtnet_config_changed_work);68166903 INIT_WORK(&vi->rx_mode_work, virtnet_rx_mode_work);68176817- spin_lock_init(&vi->refill_lock);6818690468196905 if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF)) {68206906 vi->mergeable_rx_bufs = true;···70777165 net_failover_destroy(vi->failover);70787166free_vqs:70797167 virtio_reset_device(vdev);70807080- cancel_delayed_work_sync(&vi->refill);70817168 free_receive_page_frags(vi);70827169 virtnet_del_vqs(vi);70837170free:
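The virtio_net hunks above remove the delayed refill worker entirely: when a poll round fails to refill the ring, the driver now reports the full budget as consumed so NAPI repolls the queue, instead of scheduling refill_work. The control flow in miniature, with illustrative names rather than the driver's API:

```c
#include <assert.h>
#include <stdbool.h>

/* Returns the packet count a poll round reports to NAPI: consuming
 * the whole budget keeps the queue scheduled for another poll, while
 * anything less lets NAPI complete. A failed refill therefore claims
 * the full budget to force a repoll. */
static int poll_result(int budget, int packets, bool refill_ok)
{
	return refill_ok ? packets : budget;
}
```

This trades the old work-queue retry (with its refill_lock and napi_disable ordering concerns) for NAPI's own rescheduling semantics.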
···63086308DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_XILINX, 0x5021, of_pci_make_dev_node);63096309DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_REDHAT, 0x0005, of_pci_make_dev_node);63106310DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_EFAR, 0x9660, of_pci_make_dev_node);63116311-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_RPI, PCI_DEVICE_ID_RPI_RP1_C0, of_pci_make_dev_node);6312631163136312/*63146313 * Devices known to require a longer delay before first config space access
-7
drivers/pci/vgaarb.c
···652652 return true;653653 }654654655655- /*656656- * Vgadev has neither IO nor MEM enabled. If we haven't found any657657- * other VGA devices, it is the best candidate so far.658658- */659659- if (!boot_vga)660660- return true;661661-662655 return false;663656}664657
+1
drivers/pinctrl/Kconfig
···491491 depends on ARCH_MICROCHIP || COMPILE_TEST492492 depends on OF493493 select GENERIC_PINCONF494494+ select REGMAP_MMIO494495 default y495496 help496497 This selects the pinctrl driver for gpio2 on pic64gx.
···11691169 * This function should be used only if there is any requirement11701170* to check for FOS version below 6.3.11711171 * To check if the attached fabric is a brocade fabric, use11721172- * bfa_lps_is_brcd_fabric() which works for FOS versions 6.311721172+ * fabric->lps->brcd_switch which works for FOS versions 6.311731173 * or above only.11741174 */11751175
+24
drivers/scsi/scsi_error.c
···10631063 unsigned char *cmnd, int cmnd_size, unsigned sense_bytes)10641064{10651065 struct scsi_device *sdev = scmd->device;10661066+#ifdef CONFIG_BLK_INLINE_ENCRYPTION10671067+ struct request *rq = scsi_cmd_to_rq(scmd);10681068+#endif1066106910671070 /*10681071 * We need saved copies of a number of fields - this is because···11181115 (sdev->lun << 5 & 0xe0);1119111611201117 /*11181118+ * Encryption must be disabled for the commands submitted by the error handler.11191119+ * Hence, clear the encryption context information.11201120+ */11211121+#ifdef CONFIG_BLK_INLINE_ENCRYPTION11221122+ ses->rq_crypt_keyslot = rq->crypt_keyslot;11231123+ ses->rq_crypt_ctx = rq->crypt_ctx;11241124+11251125+ rq->crypt_keyslot = NULL;11261126+ rq->crypt_ctx = NULL;11271127+#endif11281128+11291129+ /*11211130 * Zero the sense buffer. The scsi spec mandates that any11221131 * untransferred sense data should be interpreted as being zero.11231132 */···11461131 */11471132void scsi_eh_restore_cmnd(struct scsi_cmnd* scmd, struct scsi_eh_save *ses)11481133{11341134+#ifdef CONFIG_BLK_INLINE_ENCRYPTION11351135+ struct request *rq = scsi_cmd_to_rq(scmd);11361136+#endif11371137+11491138 /*11501139 * Restore original data11511140 */···11621143 scmd->underflow = ses->underflow;11631144 scmd->prot_op = ses->prot_op;11641145 scmd->eh_eflags = ses->eh_eflags;11461146+11471147+#ifdef CONFIG_BLK_INLINE_ENCRYPTION11481148+ rq->crypt_keyslot = ses->rq_crypt_keyslot;11491149+ rq->crypt_ctx = ses->rq_crypt_ctx;11501150+#endif11651151}11661152EXPORT_SYMBOL(scsi_eh_restore_cmnd);11671153
+1-1
drivers/scsi/scsi_lib.c
···24592459 * @retries: number of retries before failing24602460 * @sshdr: output pointer for decoded sense information.24612461 *24622462- * Returns zero if unsuccessful or an error if TUR failed. For24622462+ * Returns zero if successful or an error if TUR failed. For24632463 * removable media, UNIT_ATTENTION sets ->changed flag.24642464 **/24652465int
···33 * drivers/uio/uio.c44 *55 * Copyright(C) 2005, Benedikt Spranger <b.spranger@linutronix.de>66- * Copyright(C) 2005, Thomas Gleixner <tglx@linutronix.de>66+ * Copyright(C) 2005, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>77 * Copyright(C) 2006, Hans J. Koch <hjk@hansjkoch.de>88 * Copyright(C) 2006, Greg Kroah-Hartman <greg@kroah.com>99 *
+7-6
drivers/xen/acpi.c
···8989 int *trigger_out,9090 int *polarity_out)9191{9292- int gsi;9292+ u32 gsi;9393 u8 pin;9494 struct acpi_prt_entry *entry;9595 int trigger = ACPI_LEVEL_SENSITIVE;9696- int polarity = acpi_irq_model == ACPI_IRQ_MODEL_GIC ?9696+ int ret, polarity = acpi_irq_model == ACPI_IRQ_MODEL_GIC ?9797 ACPI_ACTIVE_HIGH : ACPI_ACTIVE_LOW;98989999 if (!dev || !gsi_out || !trigger_out || !polarity_out)···105105106106 entry = acpi_pci_irq_lookup(dev, pin);107107 if (entry) {108108+ ret = 0;108109 if (entry->link)109109- gsi = acpi_pci_link_allocate_irq(entry->link,110110+ ret = acpi_pci_link_allocate_irq(entry->link,110111 entry->index,111112 &trigger, &polarity,112112- NULL);113113+ NULL, &gsi);113114 else114115 gsi = entry->index;115116 } else116116- gsi = -1;117117+ ret = -ENODEV;117118118118- if (gsi < 0)119119+ if (ret < 0)119120 return -EINVAL;120121121122 *gsi_out = gsi;
+1
fs/btrfs/disk-io.c
···22552255 BTRFS_DATA_RELOC_TREE_OBJECTID, true);22562256 if (IS_ERR(root)) {22572257 if (!btrfs_test_opt(fs_info, IGNOREBADROOTS)) {22582258+ location.objectid = BTRFS_DATA_RELOC_TREE_OBJECTID;22582259 ret = PTR_ERR(root);22592260 goto out;22602261 }
+32-9
fs/btrfs/inode.c
···481481 ASSERT(size <= sectorsize);482482483483 /*484484- * The compressed size also needs to be no larger than a sector.485485- * That's also why we only need one page as the parameter.484484+ * The compressed size also needs to be no larger than a page.485485+ * That's also why we only need one folio as the parameter.486486 */487487- if (compressed_folio)487487+ if (compressed_folio) {488488 ASSERT(compressed_size <= sectorsize);489489- else489489+ ASSERT(compressed_size <= PAGE_SIZE);490490+ } else {490491 ASSERT(compressed_size == 0);492492+ }491493492494 if (compressed_size && compressed_folio)493495 cur_size = compressed_size;···574572575573 /* Inline extents must start at offset 0. */576574 if (offset != 0)575575+ return false;576576+577577+ /*578578+ * Even for bs > ps cases, cow_file_range_inline() can only accept a579579+ * single folio.580580+ *581581+ * This can be problematic and cause access beyond page boundary if a582582+ * page-sized folio is passed into that function.583583+ * And encoded write is doing exactly that.584584+ * So here we limit the inlined extent size to PAGE_SIZE.585585+ */586586+ if (size > PAGE_SIZE || compressed_size > PAGE_SIZE)577587 return false;578588579589 /* Inline extents are limited to sectorsize. 
*/···40484034 btrfs_set_inode_mapping_order(inode);4049403540504036cache_index:40514051- ret = btrfs_init_file_extent_tree(inode);40524052- if (ret)40534053- goto out;40544054- btrfs_inode_set_file_extent_range(inode, 0,40554055- round_up(i_size_read(vfs_inode), fs_info->sectorsize));40564037 /*40574038 * If we were modified in the current generation and evicted from memory40584039 * and then re-read we need to do a full sync since we don't have any···41334124 "error loading props for ino %llu (root %llu): %d",41344125 btrfs_ino(inode), btrfs_root_id(root), ret);41354126 }41274127+41284128+ /*41294129+ * We don't need the path anymore, so release it to avoid holding a read41304130+ * lock on a leaf while calling btrfs_init_file_extent_tree(), which can41314131+ * allocate memory that triggers reclaim (GFP_KERNEL) and cause a locking41324132+ * dependency.41334133+ */41344134+ btrfs_release_path(path);41354135+41364136+ ret = btrfs_init_file_extent_tree(inode);41374137+ if (ret)41384138+ goto out;41394139+ btrfs_inode_set_file_extent_range(inode, 0,41404140+ round_up(i_size_read(vfs_inode), fs_info->sectorsize));4136414141374142 if (!maybe_acls)41384143 cache_no_acl(vfs_inode);
+5-7
fs/btrfs/super.c
···736736 */737737void btrfs_set_free_space_cache_settings(struct btrfs_fs_info *fs_info)738738{739739- if (fs_info->sectorsize < PAGE_SIZE) {739739+ if (fs_info->sectorsize != PAGE_SIZE && btrfs_test_opt(fs_info, SPACE_CACHE)) {740740+ btrfs_info(fs_info,741741+ "forcing free space tree for sector size %u with page size %lu",742742+ fs_info->sectorsize, PAGE_SIZE);740743 btrfs_clear_opt(fs_info->mount_opt, SPACE_CACHE);741741- if (!btrfs_test_opt(fs_info, FREE_SPACE_TREE)) {742742- btrfs_info(fs_info,743743- "forcing free space tree for sector size %u with page size %lu",744744- fs_info->sectorsize, PAGE_SIZE);745745- btrfs_set_opt(fs_info->mount_opt, FREE_SPACE_TREE);746746- }744744+ btrfs_set_opt(fs_info->mount_opt, FREE_SPACE_TREE);747745 }748746749747 /*
+1-1
fs/btrfs/tree-log.c
···190190191191 btrfs_abort_transaction(wc->trans, error);192192193193- if (wc->subvol_path->nodes[0]) {193193+ if (wc->subvol_path && wc->subvol_path->nodes[0]) {194194 btrfs_crit(fs_info,195195 "subvolume (root %llu) leaf currently being processed:",196196 btrfs_root_id(wc->root));
···644644 * fs contexts (including its own) due to self-controlled RO645645 * accesses/contexts and no side-effect changes that need to646646 * context save & restore so it can reuse the current thread647647- * context. However, it still needs to bump `s_stack_depth` to648648- * avoid kernel stack overflow from nested filesystems.647647+ * context.648648+ * However, we still need to prevent kernel stack overflow due649649+ * to filesystem nesting: just ensure that s_stack_depth is 0650650+ * to disallow mounting EROFS on stacked filesystems.651651+ * Note: s_stack_depth is not incremented here for now, since652652+ * EROFS is the only fs supporting file-backed mounts.653653+ * It MUST change if another fs plans to support them, which654654+ * may also require adjusting FILESYSTEM_MAX_STACK_DEPTH.649655 */650656 if (erofs_is_fileio_mode(sbi)) {651651- sb->s_stack_depth =652652- file_inode(sbi->dif0.file)->i_sb->s_stack_depth + 1;653653- if (sb->s_stack_depth > FILESYSTEM_MAX_STACK_DEPTH) {654654- erofs_err(sb, "maximum fs stacking depth exceeded");657657+ inode = file_inode(sbi->dif0.file);658658+ if ((inode->i_sb->s_op == &erofs_sops &&659659+ !inode->i_sb->s_bdev) ||660660+ inode->i_sb->s_stack_depth) {661661+ erofs_err(sb, "file-backed mounts cannot be applied to stacked fses");655662 return -ENOTBLK;656663 }657664 }
···15931593 * @hashval: hash value (usually inode number) to search for15941594 * @test: callback used for comparisons between inodes15951595 * @data: opaque data pointer to pass to @test15961596+ * @isnew: return argument telling whether I_NEW was set when15971597+ * the inode was found in hash (the caller needs to15981598+ * wait for I_NEW to clear)15961599 *15971600 * Search for the inode specified by @hashval and @data in the inode cache.15981601 * If the inode is in the cache, the inode is returned with an incremented
+35-15
fs/iomap/buffered-io.c
···832832 if (!mapping_large_folio_support(iter->inode->i_mapping))833833 len = min_t(size_t, len, PAGE_SIZE - offset_in_page(pos));834834835835- if (iter->fbatch) {835835+ if (iter->iomap.flags & IOMAP_F_FOLIO_BATCH) {836836 struct folio *folio = folio_batch_next(iter->fbatch);837837838838 if (!folio)···929929 * process so return and let the caller iterate and refill the batch.930930 */931931 if (!folio) {932932- WARN_ON_ONCE(!iter->fbatch);932932+ WARN_ON_ONCE(!(iter->iomap.flags & IOMAP_F_FOLIO_BATCH));933933 return 0;934934 }935935···15441544 return status;15451545}1546154615471547-loff_t15471547+/**15481548+ * iomap_fill_dirty_folios - fill a folio batch with dirty folios15491549+ * @iter: Iteration structure15501550+ * @start: Start offset of range. Updated based on lookup progress.15511551+ * @end: End offset of range15521552+ * @iomap_flags: Flags to set on the associated iomap to track the batch.15531553+ *15541554+ * Returns the folio count directly. Also returns the associated control flag if15551555+ * the batch lookup is performed and the expected offset of a subsequent15561556+ * lookup via out params. 
The caller is responsible to set the flag on the15571557+ * associated iomap.15581558+ */15591559+unsigned int15481560iomap_fill_dirty_folios(15491561 struct iomap_iter *iter,15501550- loff_t offset,15511551- loff_t length)15621562+ loff_t *start,15631563+ loff_t end,15641564+ unsigned int *iomap_flags)15521565{15531566 struct address_space *mapping = iter->inode->i_mapping;15541554- pgoff_t start = offset >> PAGE_SHIFT;15551555- pgoff_t end = (offset + length - 1) >> PAGE_SHIFT;15671567+ pgoff_t pstart = *start >> PAGE_SHIFT;15681568+ pgoff_t pend = (end - 1) >> PAGE_SHIFT;15691569+ unsigned int count;1556157015571557- iter->fbatch = kmalloc(sizeof(struct folio_batch), GFP_KERNEL);15581558- if (!iter->fbatch)15591559- return offset + length;15601560- folio_batch_init(iter->fbatch);15711571+ if (!iter->fbatch) {15721572+ *start = end;15731573+ return 0;15741574+ }1561157515621562- filemap_get_folios_dirty(mapping, &start, end, iter->fbatch);15631563- return (start << PAGE_SHIFT);15761576+ count = filemap_get_folios_dirty(mapping, &pstart, pend, iter->fbatch);15771577+ *start = (pstart << PAGE_SHIFT);15781578+ *iomap_flags |= IOMAP_F_FOLIO_BATCH;15791579+ return count;15641580}15651581EXPORT_SYMBOL_GPL(iomap_fill_dirty_folios);15661582···15851569 const struct iomap_ops *ops,15861570 const struct iomap_write_ops *write_ops, void *private)15871571{15721572+ struct folio_batch fbatch;15881573 struct iomap_iter iter = {15891574 .inode = inode,15901575 .pos = pos,15911576 .len = len,15921577 .flags = IOMAP_ZERO,15931578 .private = private,15791579+ .fbatch = &fbatch,15941580 };15951581 struct address_space *mapping = inode->i_mapping;15961582 int ret;15971583 bool range_dirty;15841584+15851585+ folio_batch_init(&fbatch);1598158615991587 /*16001588 * To avoid an unconditional flush, check pagecache state and only flush···16101590 while ((ret = iomap_iter(&iter, ops)) > 0) {16111591 const struct iomap *srcmap = iomap_iter_srcmap(&iter);1612159216131613- if 
(WARN_ON_ONCE(iter.fbatch &&15931593+ if (WARN_ON_ONCE((iter.iomap.flags & IOMAP_F_FOLIO_BATCH) &&16141594 srcmap->type != IOMAP_UNWRITTEN))16151595 return -EIO;1616159616171617- if (!iter.fbatch &&15971597+ if (!(iter.iomap.flags & IOMAP_F_FOLIO_BATCH) &&16181598 (srcmap->type == IOMAP_HOLE ||16191599 srcmap->type == IOMAP_UNWRITTEN)) {16201600 s64 status;
···369369 while (!list_empty(dispose)) {370370 flc = list_first_entry(dispose, struct file_lock_core, flc_list);371371 list_del_init(&flc->flc_list);372372- if (flc->flc_flags & (FL_LEASE|FL_DELEG|FL_LAYOUT))373373- locks_free_lease(file_lease(flc));374374- else375375- locks_free_lock(file_lock(flc));372372+ locks_free_lock(file_lock(flc));373373+ }374374+}375375+376376+static void377377+lease_dispose_list(struct list_head *dispose)378378+{379379+ struct file_lock_core *flc;380380+381381+ while (!list_empty(dispose)) {382382+ flc = list_first_entry(dispose, struct file_lock_core, flc_list);383383+ list_del_init(&flc->flc_list);384384+ locks_free_lease(file_lease(flc));376385 }377386}378387···585576 __f_setown(filp, task_pid(current), PIDTYPE_TGID, 0);586577}587578579579+/**580580+ * lease_open_conflict - see if the given file points to an inode that has581581+ * an existing open that would conflict with the582582+ * desired lease.583583+ * @filp: file to check584584+ * @arg: type of lease that we're trying to acquire585585+ *586586+ * Check to see if there's an existing open fd on this file that would587587+ * conflict with the lease we're trying to set.588588+ */589589+static int590590+lease_open_conflict(struct file *filp, const int arg)591591+{592592+ struct inode *inode = file_inode(filp);593593+ int self_wcount = 0, self_rcount = 0;594594+595595+ if (arg == F_RDLCK)596596+ return inode_is_open_for_write(inode) ? -EAGAIN : 0;597597+ else if (arg != F_WRLCK)598598+ return 0;599599+600600+ /*601601+ * Make sure that only read/write count is from lease requestor.602602+ * Note that this will result in denying write leases when i_writecount603603+ * is negative, which is what we want. 
(We shouldn't grant write leases604604+ * on files open for execution.)605605+ */606606+ if (filp->f_mode & FMODE_WRITE)607607+ self_wcount = 1;608608+ else if (filp->f_mode & FMODE_READ)609609+ self_rcount = 1;610610+611611+ if (atomic_read(&inode->i_writecount) != self_wcount ||612612+ atomic_read(&inode->i_readcount) != self_rcount)613613+ return -EAGAIN;614614+615615+ return 0;616616+}617617+588618static const struct lease_manager_operations lease_manager_ops = {589619 .lm_break = lease_break_callback,590620 .lm_change = lease_modify,591621 .lm_setup = lease_setup,622622+ .lm_open_conflict = lease_open_conflict,592623};593624594625/*···16691620 spin_unlock(&ctx->flc_lock);16701621 percpu_up_read(&file_rwsem);1671162216721672- locks_dispose_list(&dispose);16231623+ lease_dispose_list(&dispose);16731624 error = wait_event_interruptible_timeout(new_fl->c.flc_wait,16741625 list_empty(&new_fl->c.flc_blocked_member),16751626 break_time);···16921643out:16931644 spin_unlock(&ctx->flc_lock);16941645 percpu_up_read(&file_rwsem);16951695- locks_dispose_list(&dispose);16461646+ lease_dispose_list(&dispose);16961647free_lock:16971648 locks_free_lease(new_fl);16981649 return error;···17761727 spin_unlock(&ctx->flc_lock);17771728 percpu_up_read(&file_rwsem);1778172917791779- locks_dispose_list(&dispose);17301730+ lease_dispose_list(&dispose);17801731 }17811732 return type;17821733}···17911742 if (deleg->d_flags != 0 || deleg->__pad != 0)17921743 return -EINVAL;17931744 deleg->d_type = __fcntl_getlease(filp, FL_DELEG);17941794- return 0;17951795-}17961796-17971797-/**17981798- * check_conflicting_open - see if the given file points to an inode that has17991799- * an existing open that would conflict with the18001800- * desired lease.18011801- * @filp: file to check18021802- * @arg: type of lease that we're trying to acquire18031803- * @flags: current lock flags18041804- *18051805- * Check to see if there's an existing open fd on this file that would18061806- * conflict with 
the lease we're trying to set.18071807- */18081808-static int18091809-check_conflicting_open(struct file *filp, const int arg, int flags)18101810-{18111811- struct inode *inode = file_inode(filp);18121812- int self_wcount = 0, self_rcount = 0;18131813-18141814- if (flags & FL_LAYOUT)18151815- return 0;18161816- if (flags & FL_DELEG)18171817- /* We leave these checks to the caller */18181818- return 0;18191819-18201820- if (arg == F_RDLCK)18211821- return inode_is_open_for_write(inode) ? -EAGAIN : 0;18221822- else if (arg != F_WRLCK)18231823- return 0;18241824-18251825- /*18261826- * Make sure that only read/write count is from lease requestor.18271827- * Note that this will result in denying write leases when i_writecount18281828- * is negative, which is what we want. (We shouldn't grant write leases18291829- * on files open for execution.)18301830- */18311831- if (filp->f_mode & FMODE_WRITE)18321832- self_wcount = 1;18331833- else if (filp->f_mode & FMODE_READ)18341834- self_rcount = 1;18351835-18361836- if (atomic_read(&inode->i_writecount) != self_wcount ||18371837- atomic_read(&inode->i_readcount) != self_rcount)18381838- return -EAGAIN;18391839-18401745 return 0;18411746}18421747···18301827 percpu_down_read(&file_rwsem);18311828 spin_lock(&ctx->flc_lock);18321829 time_out_leases(inode, &dispose);18331833- error = check_conflicting_open(filp, arg, lease->c.flc_flags);18301830+ error = lease->fl_lmops->lm_open_conflict(filp, arg);18341831 if (error)18351832 goto out;18361833···18871884 * precedes these checks.18881885 */18891886 smp_mb();18901890- error = check_conflicting_open(filp, arg, lease->c.flc_flags);18871887+ error = lease->fl_lmops->lm_open_conflict(filp, arg);18911888 if (error) {18921889 locks_unlink_lock_ctx(&lease->c);18931890 goto out;···18991896out:19001897 spin_unlock(&ctx->flc_lock);19011898 percpu_up_read(&file_rwsem);19021902- locks_dispose_list(&dispose);18991899+ lease_dispose_list(&dispose);19031900 if (is_deleg)19041901 
inode_unlock(inode);19051902 if (!error && !my_fl)···19351932 error = fl->fl_lmops->lm_change(victim, F_UNLCK, &dispose);19361933 spin_unlock(&ctx->flc_lock);19371934 percpu_up_read(&file_rwsem);19381938- locks_dispose_list(&dispose);19351935+ lease_dispose_list(&dispose);19391936 return error;19401937}19411938···27382735 spin_unlock(&ctx->flc_lock);27392736 percpu_up_read(&file_rwsem);2740273727412741- locks_dispose_list(&dispose);27382738+ lease_dispose_list(&dispose);27422739}2743274027442741/*
+15-6
fs/namei.c
···830830static bool legitimize_links(struct nameidata *nd)831831{832832 int i;833833- if (unlikely(nd->flags & LOOKUP_CACHED)) {834834- drop_links(nd);835835- nd->depth = 0;836836- return false;837837- }833833+834834+ VFS_BUG_ON(nd->flags & LOOKUP_CACHED);835835+838836 for (i = 0; i < nd->depth; i++) {839837 struct saved *last = nd->stack + i;840838 if (unlikely(!legitimize_path(nd, &last->link, last->seq))) {···881883882884 BUG_ON(!(nd->flags & LOOKUP_RCU));883885886886+ if (unlikely(nd->flags & LOOKUP_CACHED)) {887887+ drop_links(nd);888888+ nd->depth = 0;889889+ goto out1;890890+ }884891 if (unlikely(nd->depth && !legitimize_links(nd)))885892 goto out1;886893 if (unlikely(!legitimize_path(nd, &nd->path, nd->seq)))···921918 int res;922919 BUG_ON(!(nd->flags & LOOKUP_RCU));923920921921+ if (unlikely(nd->flags & LOOKUP_CACHED)) {922922+ drop_links(nd);923923+ nd->depth = 0;924924+ goto out2;925925+ }924926 if (unlikely(nd->depth && !legitimize_links(nd)))925927 goto out2;926928 res = __legitimize_mnt(nd->path.mnt, nd->m_seq);···28442836}2845283728462838/**28472847- * start_dirop - begin a create or remove dirop, performing locking and lookup28392839+ * __start_dirop - begin a create or remove dirop, performing locking and lookup28482840 * @parent: the dentry of the parent in which the operation will occur28492841 * @name: a qstr holding the name within that parent28502842 * @lookup_flags: intent and other lookup flags.28432843+ * @state: task state bitmask28512844 *28522845 * The lookup is performed and necessary locks are taken so that, on success,28532846 * the returned dentry can be operated on safely.
···764764 return lease_modify(onlist, arg, dispose);765765}766766767767+/**768768+ * nfsd4_layout_lm_open_conflict - see if the given file points to an inode that has769769+ * an existing open that would conflict with the770770+ * desired lease.771771+ * @filp: file to check772772+ * @arg: type of lease that we're trying to acquire773773+ *774774+ * The kernel will call into this operation to determine whether there775775+ * are conflicting opens that may prevent the layout from being granted.776776+ * For nfsd, that check is done at a higher level, so this trivially777777+ * returns 0.778778+ */779779+static int780780+nfsd4_layout_lm_open_conflict(struct file *filp, int arg)781781+{782782+ return 0;783783+}784784+767785static const struct lease_manager_operations nfsd4_layouts_lm_ops = {768768- .lm_break = nfsd4_layout_lm_break,769769- .lm_change = nfsd4_layout_lm_change,786786+ .lm_break = nfsd4_layout_lm_break,787787+ .lm_change = nfsd4_layout_lm_change,788788+ .lm_open_conflict = nfsd4_layout_lm_open_conflict,770789};771790772791int
+19
fs/nfsd/nfs4state.c
···55555555 return -EAGAIN;55565556}5557555755585558+/**55595559+ * nfsd4_deleg_lm_open_conflict - see if the given file points to an inode that has55605560+ * an existing open that would conflict with the55615561+ * desired lease.55625562+ * @filp: file to check55635563+ * @arg: type of lease that we're trying to acquire55645564+ *55655565+ * The kernel will call into this operation to determine whether there55665566+ * are conflicting opens that may prevent the deleg from being granted.55675567+ * For nfsd, that check is done at a higher level, so this trivially55685568+ * returns 0.55695569+ */55705570+static int55715571+nfsd4_deleg_lm_open_conflict(struct file *filp, int arg)55725572+{55735573+ return 0;55745574+}55755575+55585576static const struct lease_manager_operations nfsd_lease_mng_ops = {55595577 .lm_breaker_owns_lease = nfsd_breaker_owns_lease,55605578 .lm_break = nfsd_break_deleg_cb,55615579 .lm_change = nfsd_change_deleg_cb,55805580+ .lm_open_conflict = nfsd4_deleg_lm_open_conflict,55625581};5563558255645583static __be32 nfsd4_check_seqid(struct nfsd4_compound_state *cstate, struct nfs4_stateowner *so, u32 seqid)
+18
fs/pidfs.c
···517517 switch (cmd) {518518 /* Namespaces that hang of nsproxy. */519519 case PIDFD_GET_CGROUP_NAMESPACE:520520+#ifdef CONFIG_CGROUPS520521 if (!ns_ref_get(nsp->cgroup_ns))521522 break;522523 ns_common = to_ns_common(nsp->cgroup_ns);524524+#endif523525 break;524526 case PIDFD_GET_IPC_NAMESPACE:527527+#ifdef CONFIG_IPC_NS525528 if (!ns_ref_get(nsp->ipc_ns))526529 break;527530 ns_common = to_ns_common(nsp->ipc_ns);531531+#endif528532 break;529533 case PIDFD_GET_MNT_NAMESPACE:530534 if (!ns_ref_get(nsp->mnt_ns))···536532 ns_common = to_ns_common(nsp->mnt_ns);537533 break;538534 case PIDFD_GET_NET_NAMESPACE:535535+#ifdef CONFIG_NET_NS539536 if (!ns_ref_get(nsp->net_ns))540537 break;541538 ns_common = to_ns_common(nsp->net_ns);539539+#endif542540 break;543541 case PIDFD_GET_PID_FOR_CHILDREN_NAMESPACE:542542+#ifdef CONFIG_PID_NS544543 if (!ns_ref_get(nsp->pid_ns_for_children))545544 break;546545 ns_common = to_ns_common(nsp->pid_ns_for_children);546546+#endif547547 break;548548 case PIDFD_GET_TIME_NAMESPACE:549549+#ifdef CONFIG_TIME_NS549550 if (!ns_ref_get(nsp->time_ns))550551 break;551552 ns_common = to_ns_common(nsp->time_ns);553553+#endif552554 break;553555 case PIDFD_GET_TIME_FOR_CHILDREN_NAMESPACE:556556+#ifdef CONFIG_TIME_NS554557 if (!ns_ref_get(nsp->time_ns_for_children))555558 break;556559 ns_common = to_ns_common(nsp->time_ns_for_children);560560+#endif557561 break;558562 case PIDFD_GET_UTS_NAMESPACE:563563+#ifdef CONFIG_UTS_NS559564 if (!ns_ref_get(nsp->uts_ns))560565 break;561566 ns_common = to_ns_common(nsp->uts_ns);567567+#endif562568 break;563569 /* Namespaces that don't hang of nsproxy. 
*/564570 case PIDFD_GET_USER_NAMESPACE:571571+#ifdef CONFIG_USER_NS565572 scoped_guard(rcu) {566573 struct user_namespace *user_ns;567574···581566 break;582567 ns_common = to_ns_common(user_ns);583568 }569569+#endif584570 break;585571 case PIDFD_GET_PID_NAMESPACE:572572+#ifdef CONFIG_PID_NS586573 scoped_guard(rcu) {587574 struct pid_namespace *pid_ns;588575···593576 break;594577 ns_common = to_ns_common(pid_ns);595578 }579579+#endif596580 break;597581 default:598582 return -ENOIOCTLCMD;
···176176 /**177177 * @disable:178178 *179179- * The @disable callback should disable the bridge.179179+ * This callback should disable the bridge. It is called right before180180+ * the preceding element in the display pipe is disabled. If the181181+ * preceding element is a bridge this means it's called before that182182+ * bridge's @disable vfunc. If the preceding element is a &drm_encoder183183+ * it's called right before the &drm_encoder_helper_funcs.disable,184184+ * &drm_encoder_helper_funcs.prepare or &drm_encoder_helper_funcs.dpms185185+ * hook.180186 *181187 * The bridge can assume that the display pipe (i.e. clocks and timing182188 * signals) feeding it is still running when this callback is called.183183- *184184- *185185- * If the preceding element is a &drm_bridge, then this is called before186186- * that bridge is disabled via one of:187187- *188188- * - &drm_bridge_funcs.disable189189- * - &drm_bridge_funcs.atomic_disable190190- *191191- * If the preceding element of the bridge is a display controller, then192192- * this callback is called before the encoder is disabled via one of:193193- *194194- * - &drm_encoder_helper_funcs.atomic_disable195195- * - &drm_encoder_helper_funcs.prepare196196- * - &drm_encoder_helper_funcs.disable197197- * - &drm_encoder_helper_funcs.dpms198198- *199199- * and the CRTC is disabled via one of:200200- *201201- * - &drm_crtc_helper_funcs.prepare202202- * - &drm_crtc_helper_funcs.atomic_disable203203- * - &drm_crtc_helper_funcs.disable204204- * - &drm_crtc_helper_funcs.dpms.205189 *206190 * The @disable callback is optional.207191 *···199215 /**200216 * @post_disable:201217 *218218+ * This callback should disable the bridge. It is called right after the219219+ * preceding element in the display pipe is disabled. If the preceding220220+ * element is a bridge this means it's called after that bridge's221221+ * @post_disable function. 
If the preceding element is a &drm_encoder222222+ * it's called right after the encoder's223223+ * &drm_encoder_helper_funcs.disable, &drm_encoder_helper_funcs.prepare224224+ * or &drm_encoder_helper_funcs.dpms hook.225225+ *202226 * The bridge must assume that the display pipe (i.e. clocks and timing203203- * signals) feeding this bridge is no longer running when the204204- * @post_disable is called.205205- *206206- * This callback should perform all the actions required by the hardware207207- * after it has stopped receiving signals from the preceding element.208208- *209209- * If the preceding element is a &drm_bridge, then this is called after210210- * that bridge is post-disabled (unless marked otherwise by the211211- * @pre_enable_prev_first flag) via one of:212212- *213213- * - &drm_bridge_funcs.post_disable214214- * - &drm_bridge_funcs.atomic_post_disable215215- *216216- * If the preceding element of the bridge is a display controller, then217217- * this callback is called after the encoder is disabled via one of:218218- *219219- * - &drm_encoder_helper_funcs.atomic_disable220220- * - &drm_encoder_helper_funcs.prepare221221- * - &drm_encoder_helper_funcs.disable222222- * - &drm_encoder_helper_funcs.dpms223223- *224224- * and the CRTC is disabled via one of:225225- *226226- * - &drm_crtc_helper_funcs.prepare227227- * - &drm_crtc_helper_funcs.atomic_disable228228- * - &drm_crtc_helper_funcs.disable229229- * - &drm_crtc_helper_funcs.dpms227227+ * signals) feeding it is no longer running when this callback is228228+ * called.230229 *231230 * The @post_disable callback is optional.232231 *···252285 /**253286 * @pre_enable:254287 *288288+ * This callback should enable the bridge. It is called right before289289+ * the preceding element in the display pipe is enabled. If the290290+ * preceding element is a bridge this means it's called before that291291+ * bridge's @pre_enable function. 
If the preceding element is a292292+ * &drm_encoder it's called right before the encoder's293293+ * &drm_encoder_helper_funcs.enable, &drm_encoder_helper_funcs.commit or294294+ * &drm_encoder_helper_funcs.dpms hook.295295+ *255296 * The display pipe (i.e. clocks and timing signals) feeding this bridge256256- * will not yet be running when the @pre_enable is called.257257- *258258- * This callback should perform all the necessary actions to prepare the259259- * bridge to accept signals from the preceding element.260260- *261261- * If the preceding element is a &drm_bridge, then this is called before262262- * that bridge is pre-enabled (unless marked otherwise by263263- * @pre_enable_prev_first flag) via one of:264264- *265265- * - &drm_bridge_funcs.pre_enable266266- * - &drm_bridge_funcs.atomic_pre_enable267267- *268268- * If the preceding element of the bridge is a display controller, then269269- * this callback is called before the CRTC is enabled via one of:270270- *271271- * - &drm_crtc_helper_funcs.atomic_enable272272- * - &drm_crtc_helper_funcs.commit273273- *274274- * and the encoder is enabled via one of:275275- *276276- * - &drm_encoder_helper_funcs.atomic_enable277277- * - &drm_encoder_helper_funcs.enable278278- * - &drm_encoder_helper_funcs.commit297297+ * will not yet be running when this callback is called. The bridge must298298+ * not enable the display link feeding the next bridge in the chain (if299299+ * there is one) when this callback is called.279300 *280301 * The @pre_enable callback is optional.281302 *···277322 /**278323 * @enable:279324 *280280- * The @enable callback should enable the bridge.325325+ * This callback should enable the bridge. It is called right after326326+ * the preceding element in the display pipe is enabled. If the327327+ * preceding element is a bridge this means it's called after that328328+ * bridge's @enable function. 
If the preceding element is a329329+ * &drm_encoder it's called right after the encoder's330330+ * &drm_encoder_helper_funcs.enable, &drm_encoder_helper_funcs.commit or331331+ * &drm_encoder_helper_funcs.dpms hook.281332 *282333 * The bridge can assume that the display pipe (i.e. clocks and timing283334 * signals) feeding it is running when this callback is called. This284335 * callback must enable the display link feeding the next bridge in the285336 * chain if there is one.286286- *287287- * If the preceding element is a &drm_bridge, then this is called after288288- * that bridge is enabled via one of:289289- *290290- * - &drm_bridge_funcs.enable291291- * - &drm_bridge_funcs.atomic_enable292292- *293293- * If the preceding element of the bridge is a display controller, then294294- * this callback is called after the CRTC is enabled via one of:295295- *296296- * - &drm_crtc_helper_funcs.atomic_enable297297- * - &drm_crtc_helper_funcs.commit298298- *299299- * and the encoder is enabled via one of:300300- *301301- * - &drm_encoder_helper_funcs.atomic_enable302302- * - &drm_encoder_helper_funcs.enable303303- * - drm_encoder_helper_funcs.commit304337 *305338 * The @enable callback is optional.306339 *···302359 /**303360 * @atomic_pre_enable:304361 *362362+ * This callback should enable the bridge. It is called right before363363+ * the preceding element in the display pipe is enabled. If the364364+ * preceding element is a bridge this means it's called before that365365+ * bridge's @atomic_pre_enable or @pre_enable function. If the preceding366366+ * element is a &drm_encoder it's called right before the encoder's367367+ * &drm_encoder_helper_funcs.atomic_enable hook.368368+ *305369 * The display pipe (i.e. 
clocks and timing signals) feeding this bridge306306- * will not yet be running when the @atomic_pre_enable is called.307307- *308308- * This callback should perform all the necessary actions to prepare the309309- * bridge to accept signals from the preceding element.310310- *311311- * If the preceding element is a &drm_bridge, then this is called before312312- * that bridge is pre-enabled (unless marked otherwise by313313- * @pre_enable_prev_first flag) via one of:314314- *315315- * - &drm_bridge_funcs.pre_enable316316- * - &drm_bridge_funcs.atomic_pre_enable317317- *318318- * If the preceding element of the bridge is a display controller, then319319- * this callback is called before the CRTC is enabled via one of:320320- *321321- * - &drm_crtc_helper_funcs.atomic_enable322322- * - &drm_crtc_helper_funcs.commit323323- *324324- * and the encoder is enabled via one of:325325- *326326- * - &drm_encoder_helper_funcs.atomic_enable327327- * - &drm_encoder_helper_funcs.enable328328- * - &drm_encoder_helper_funcs.commit370370+ * will not yet be running when this callback is called. The bridge must371371+ * not enable the display link feeding the next bridge in the chain (if372372+ * there is one) when this callback is called.329373 *330374 * The @atomic_pre_enable callback is optional.331375 */···322392 /**323393 * @atomic_enable:324394 *325325- * The @atomic_enable callback should enable the bridge.395395+ * This callback should enable the bridge. It is called right after396396+ * the preceding element in the display pipe is enabled. If the397397+ * preceding element is a bridge this means it's called after that398398+ * bridge's @atomic_enable or @enable function. If the preceding element399399+ * is a &drm_encoder it's called right after the encoder's400400+ * &drm_encoder_helper_funcs.atomic_enable hook.326401 *327402 * The bridge can assume that the display pipe (i.e. clocks and timing328403 * signals) feeding it is running when this callback is called. 
This329404 * callback must enable the display link feeding the next bridge in the330405 * chain if there is one.331331- *332332- * If the preceding element is a &drm_bridge, then this is called after333333- * that bridge is enabled via one of:334334- *335335- * - &drm_bridge_funcs.enable336336- * - &drm_bridge_funcs.atomic_enable337337- *338338- * If the preceding element of the bridge is a display controller, then339339- * this callback is called after the CRTC is enabled via one of:340340- *341341- * - &drm_crtc_helper_funcs.atomic_enable342342- * - &drm_crtc_helper_funcs.commit343343- *344344- * and the encoder is enabled via one of:345345- *346346- * - &drm_encoder_helper_funcs.atomic_enable347347- * - &drm_encoder_helper_funcs.enable348348- * - drm_encoder_helper_funcs.commit349406 *350407 * The @atomic_enable callback is optional.351408 */···341424 /**342425 * @atomic_disable:343426 *344344- * The @atomic_disable callback should disable the bridge.427427+ * This callback should disable the bridge. It is called right before428428+ * the preceding element in the display pipe is disabled. If the429429+ * preceding element is a bridge this means it's called before that430430+ * bridge's @atomic_disable or @disable vfunc. If the preceding element431431+ * is a &drm_encoder it's called right before the432432+ * &drm_encoder_helper_funcs.atomic_disable hook.345433 *346434 * The bridge can assume that the display pipe (i.e. 
clocks and timing347435 * signals) feeding it is still running when this callback is called.348348- *349349- * If the preceding element is a &drm_bridge, then this is called before350350- * that bridge is disabled via one of:351351- *352352- * - &drm_bridge_funcs.disable353353- * - &drm_bridge_funcs.atomic_disable354354- *355355- * If the preceding element of the bridge is a display controller, then356356- * this callback is called before the encoder is disabled via one of:357357- *358358- * - &drm_encoder_helper_funcs.atomic_disable359359- * - &drm_encoder_helper_funcs.prepare360360- * - &drm_encoder_helper_funcs.disable361361- * - &drm_encoder_helper_funcs.dpms362362- *363363- * and the CRTC is disabled via one of:364364- *365365- * - &drm_crtc_helper_funcs.prepare366366- * - &drm_crtc_helper_funcs.atomic_disable367367- * - &drm_crtc_helper_funcs.disable368368- * - &drm_crtc_helper_funcs.dpms.369436 *370437 * The @atomic_disable callback is optional.371438 */···359458 /**360459 * @atomic_post_disable:361460 *461461+ * This callback should disable the bridge. It is called right after the462462+ * preceding element in the display pipe is disabled. If the preceding463463+ * element is a bridge this means it's called after that bridge's464464+ * @atomic_post_disable or @post_disable function. If the preceding465465+ * element is a &drm_encoder it's called right after the encoder's466466+ * &drm_encoder_helper_funcs.atomic_disable hook.467467+ *362468 * The bridge must assume that the display pipe (i.e. 
clocks and timing363363- * signals) feeding this bridge is no longer running when the364364- * @atomic_post_disable is called.365365- *366366- * This callback should perform all the actions required by the hardware367367- * after it has stopped receiving signals from the preceding element.368368- *369369- * If the preceding element is a &drm_bridge, then this is called after370370- * that bridge is post-disabled (unless marked otherwise by the371371- * @pre_enable_prev_first flag) via one of:372372- *373373- * - &drm_bridge_funcs.post_disable374374- * - &drm_bridge_funcs.atomic_post_disable375375- *376376- * If the preceding element of the bridge is a display controller, then377377- * this callback is called after the encoder is disabled via one of:378378- *379379- * - &drm_encoder_helper_funcs.atomic_disable380380- * - &drm_encoder_helper_funcs.prepare381381- * - &drm_encoder_helper_funcs.disable382382- * - &drm_encoder_helper_funcs.dpms383383- *384384- * and the CRTC is disabled via one of:385385- *386386- * - &drm_crtc_helper_funcs.prepare387387- * - &drm_crtc_helper_funcs.atomic_disable388388- * - &drm_crtc_helper_funcs.disable389389- * - &drm_crtc_helper_funcs.dpms469469+ * signals) feeding it is no longer running when this callback is470470+ * called.390471 *391472 * The @atomic_post_disable callback is optional.392473 */
···626626#endif627627628628 /* All ancestors including self */629629- struct cgroup *ancestors[];629629+ union {630630+ DECLARE_FLEX_ARRAY(struct cgroup *, ancestors);631631+ struct {632632+ struct cgroup *_root_ancestor;633633+ DECLARE_FLEX_ARRAY(struct cgroup *, _low_ancestors);634634+ };635635+ };630636};631637632638/*···653647 struct list_head root_list;654648 struct rcu_head rcu; /* Must be near the top */655649656656- /*657657- * The root cgroup. The containing cgroup_root will be destroyed on its658658- * release. cgrp->ancestors[0] will be used overflowing into the659659- * following field. cgrp_ancestor_storage must immediately follow.660660- */661661- struct cgroup cgrp;662662-663663- /* must follow cgrp for cgrp->ancestors[0], see above */664664- struct cgroup *cgrp_ancestor_storage;665665-666650 /* Number of cgroups in the hierarchy, used only for /proc/cgroups */667651 atomic_t nr_cgrps;668652···664668665669 /* The name for this hierarchy - may be empty */666670 char name[MAX_CGROUP_ROOT_NAMELEN];671671+672672+ /*673673+ * The root cgroup. The containing cgroup_root will be destroyed on its674674+ * release. This must be embedded last due to flexible array at the end675675+ * of struct cgroup.676676+ */677677+ struct cgroup cgrp;667678};668679669680/*
···11671167 */11681168struct ftrace_graph_ent {11691169 unsigned long func; /* Current function */11701170- unsigned long depth;11701170+ long depth; /* signed to check for less than zero */11711171} __packed;1172117211731173/*
+1-1
include/linux/hrtimer.h
···22/*33 * hrtimers - High-resolution kernel timers44 *55- * Copyright(C) 2005, Thomas Gleixner <tglx@linutronix.de>55+ * Copyright(C) 2005, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>66 * Copyright(C) 2005, Red Hat, Inc., Ingo Molnar77 *88 * data type definitions, declarations, prototypes
+6-2
include/linux/iomap.h
···8888/*8989 * Flags set by the core iomap code during operations:9090 *9191+ * IOMAP_F_FOLIO_BATCH indicates that the folio batch mechanism is active9292+ * for this operation, set by iomap_fill_dirty_folios().9393+ *9194 * IOMAP_F_SIZE_CHANGED indicates to the iomap_end method that the file size9295 * has changed as the result of this write operation.9396 *···9895 * range it covers needs to be remapped by the high level before the operation9996 * can proceed.10097 */9898+#define IOMAP_F_FOLIO_BATCH (1U << 13)10199#define IOMAP_F_SIZE_CHANGED (1U << 14)102100#define IOMAP_F_STALE (1U << 15)103101···356352int iomap_file_unshare(struct inode *inode, loff_t pos, loff_t len,357353 const struct iomap_ops *ops,358354 const struct iomap_write_ops *write_ops);359359-loff_t iomap_fill_dirty_folios(struct iomap_iter *iter, loff_t offset,360360- loff_t length);355355+unsigned int iomap_fill_dirty_folios(struct iomap_iter *iter, loff_t *start,356356+ loff_t end, unsigned int *iomap_flags);361357int iomap_zero_range(struct inode *inode, loff_t pos, loff_t len,362358 bool *did_zero, const struct iomap_ops *ops,363359 const struct iomap_write_ops *write_ops, void *private);
+1-1
include/linux/ktime.h
···33 *44 * ktime_t - nanosecond-resolution time format.55 *66- * Copyright(C) 2005, Thomas Gleixner <tglx@linutronix.de>66+ * Copyright(C) 2005, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>77 * Copyright(C) 2005, Red Hat, Inc., Ingo Molnar88 *99 * data type definitions, declarations, prototypes and macros.
···22/*33 * Copyright (C) 2000-2010 Steven J. Hill <sjhill@realitydiluted.com>44 * David Woodhouse <dwmw2@infradead.org>55- * Thomas Gleixner <tglx@linutronix.de>55+ * Thomas Gleixner <tglx@kernel.org>66 *77 * This file is the header for the NAND Hamming ECC implementation.88 */
···33 * include/linux/uio_driver.h44 *55 * Copyright(C) 2005, Benedikt Spranger <b.spranger@linutronix.de>66- * Copyright(C) 2005, Thomas Gleixner <tglx@linutronix.de>66+ * Copyright(C) 2005, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>77 * Copyright(C) 2006, Hans J. Koch <hjk@hansjkoch.de>88 * Copyright(C) 2006, Greg Kroah-Hartman <greg@kroah.com>99 *
+6
include/net/dropreason-core.h
···6767 FN(TC_EGRESS) \6868 FN(SECURITY_HOOK) \6969 FN(QDISC_DROP) \7070+ FN(QDISC_BURST_DROP) \7071 FN(QDISC_OVERLIMIT) \7172 FN(QDISC_CONGESTED) \7273 FN(CAKE_FLOOD) \···375374 * failed to enqueue to current qdisc)376375 */377376 SKB_DROP_REASON_QDISC_DROP,377377+ /**378378+ * @SKB_DROP_REASON_QDISC_BURST_DROP: dropped when net.core.qdisc_max_burst379379+ * limit is hit.380380+ */381381+ SKB_DROP_REASON_QDISC_BURST_DROP,378382 /**379383 * @SKB_DROP_REASON_QDISC_OVERLIMIT: dropped by qdisc when a qdisc380384 * instance exceeds its total buffer size limit.
+1
include/net/hotdata.h
···4242 int netdev_budget_usecs;4343 int tstamp_prequeue;4444 int max_backlog;4545+ int qdisc_max_burst;4546 int dev_tx_weight;4647 int dev_rx_weight;4748 int sysctl_max_skb_frags;
···195195} __attribute__((packed));196196197197/**198198- * enum mali_c55_param_buffer_version - Mali-C55 parameters block versioning199199- *200200- * @MALI_C55_PARAM_BUFFER_V1: First version of Mali-C55 parameters block201201- */202202-enum mali_c55_param_buffer_version {203203- MALI_C55_PARAM_BUFFER_V1,204204-};205205-206206-/**207198 * enum mali_c55_param_block_type - Enumeration of Mali-C55 parameter blocks208199 *209200 * This enumeration defines the types of Mali-C55 parameters block. Each block
+1-1
include/uapi/linux/perf_event.h
···22/*33 * Performance events:44 *55- * Copyright (C) 2008-2009, Thomas Gleixner <tglx@linutronix.de>55+ * Copyright (C) 2008-2009, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>66 * Copyright (C) 2008-2011, Red Hat, Inc., Ingo Molnar77 * Copyright (C) 2008-2011, Red Hat, Inc., Peter Zijlstra88 *
+1-1
include/uapi/linux/xattr.h
···2323#define XATTR_REPLACE 0x2 /* set value, fail if attr does not exist */24242525struct xattr_args {2626- __aligned_u64 __user value;2626+ __aligned_u64 value;2727 __u32 size;2828 __u32 flags;2929};
+4-7
io_uring/io-wq.c
···947947 return ret;948948}949949950950-static bool io_wq_for_each_worker(struct io_wq *wq,950950+static void io_wq_for_each_worker(struct io_wq *wq,951951 bool (*func)(struct io_worker *, void *),952952 void *data)953953{954954- for (int i = 0; i < IO_WQ_ACCT_NR; i++) {955955- if (!io_acct_for_each_worker(&wq->acct[i], func, data))956956- return false;957957- }958958-959959- return true;954954+ for (int i = 0; i < IO_WQ_ACCT_NR; i++)955955+ if (io_acct_for_each_worker(&wq->acct[i], func, data))956956+ break;960957}961958962959static bool io_wq_worker_wake(struct io_worker *worker, void *data)
+5
kernel/bpf/verifier.c
···96099609 if (reg->type != PTR_TO_MAP_VALUE)96109610 return -EINVAL;9611961196129612+ if (map->map_type == BPF_MAP_TYPE_INSN_ARRAY) {96139613+ verbose(env, "R%d points to insn_array map which cannot be used as const string\n", regno);96149614+ return -EACCES;96159615+ }96169616+96129617 if (!bpf_map_is_rdonly(map)) {96139618 verbose(env, "R%d does not point to a readonly map'\n", regno);96149619 return -EACCES;
+1-1
kernel/cgroup/cgroup.c
···58475847 int ret;5848584858495849 /* allocate the cgroup and its ID, 0 is reserved for the root */58505850- cgrp = kzalloc(struct_size(cgrp, ancestors, (level + 1)), GFP_KERNEL);58505850+ cgrp = kzalloc(struct_size(cgrp, _low_ancestors, level), GFP_KERNEL);58515851 if (!cgrp)58525852 return ERR_PTR(-ENOMEM);58535853
···1515 * Author: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>1616 *1717 * Scaled math optimizations by Thomas Gleixner1818- * Copyright (C) 2007, Thomas Gleixner <tglx@linutronix.de>1818+ * Copyright (C) 2007, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>1919 *2020 * Adaptive scheduling granularity, math enhancements by Peter Zijlstra2121 * Copyright (C) 2007 Red Hat, Inc., Peter Zijlstra
+1-1
kernel/sched/pelt.c
···1515 * Author: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>1616 *1717 * Scaled math optimizations by Thomas Gleixner1818- * Copyright (C) 2007, Thomas Gleixner <tglx@linutronix.de>1818+ * Copyright (C) 2007, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>1919 *2020 * Adaptive scheduling granularity, math enhancements by Peter Zijlstra2121 * Copyright (C) 2007 Red Hat, Inc., Peter Zijlstra
+1-1
kernel/time/clockevents.c
···22/*33 * This file contains functions which manage clock event devices.44 *55- * Copyright(C) 2005-2006, Thomas Gleixner <tglx@linutronix.de>55+ * Copyright(C) 2005-2006, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>66 * Copyright(C) 2005-2007, Red Hat, Inc., Ingo Molnar77 * Copyright(C) 2006-2007, Timesys Corp., Thomas Gleixner88 */
+1-1
kernel/time/hrtimer.c
···11// SPDX-License-Identifier: GPL-2.022/*33- * Copyright(C) 2005-2006, Thomas Gleixner <tglx@linutronix.de>33+ * Copyright(C) 2005-2006, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>44 * Copyright(C) 2005-2007, Red Hat, Inc., Ingo Molnar55 * Copyright(C) 2006-2007 Timesys Corp., Thomas Gleixner66 *
+1-1
kernel/time/tick-broadcast.c
···33 * This file contains functions which emulate a local clock-event44 * device via a broadcast event source.55 *66- * Copyright(C) 2005-2006, Thomas Gleixner <tglx@linutronix.de>66+ * Copyright(C) 2005-2006, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>77 * Copyright(C) 2005-2007, Red Hat, Inc., Ingo Molnar88 * Copyright(C) 2006-2007, Timesys Corp., Thomas Gleixner99 */
+1-1
kernel/time/tick-common.c
···33 * This file contains the base functions to manage periodic tick44 * related events.55 *66- * Copyright(C) 2005-2006, Thomas Gleixner <tglx@linutronix.de>66+ * Copyright(C) 2005-2006, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>77 * Copyright(C) 2005-2007, Red Hat, Inc., Ingo Molnar88 * Copyright(C) 2006-2007, Timesys Corp., Thomas Gleixner99 */
+1-1
kernel/time/tick-oneshot.c
···33 * This file contains functions which manage high resolution tick44 * related events.55 *66- * Copyright(C) 2005-2006, Thomas Gleixner <tglx@linutronix.de>66+ * Copyright(C) 2005-2006, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>77 * Copyright(C) 2005-2007, Red Hat, Inc., Ingo Molnar88 * Copyright(C) 2006-2007, Timesys Corp., Thomas Gleixner99 */
+1-1
kernel/time/tick-sched.c
···11// SPDX-License-Identifier: GPL-2.022/*33- * Copyright(C) 2005-2006, Thomas Gleixner <tglx@linutronix.de>33+ * Copyright(C) 2005-2006, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>44 * Copyright(C) 2005-2007, Red Hat, Inc., Ingo Molnar55 * Copyright(C) 2006-2007 Timesys Corp., Thomas Gleixner66 *
···138138 * by commas.139139 */140140/* Set to string format zero to disable by default */141141-char ftrace_dump_on_oops[MAX_TRACER_SIZE] = "0";141141+static char ftrace_dump_on_oops[MAX_TRACER_SIZE] = "0";142142143143/* When set, tracing will stop when a WARN*() is hit */144144static int __disable_trace_on_warning;···30123012 struct ftrace_stack *fstack;30133013 struct stack_entry *entry;30143014 int stackidx;30153015+ int bit;30163016+30173017+ bit = trace_test_and_set_recursion(_THIS_IP_, _RET_IP_, TRACE_EVENT_START);30183018+ if (bit < 0)30193019+ return;3015302030163021 /*30173022 * Add one, for this function and the call to save_stack_trace()···30853080 /* Again, don't let gcc optimize things here */30863081 barrier();30873082 __this_cpu_dec(ftrace_stack_reserve);30833083+ trace_clear_recursion(bit);30883084}3089308530903086static inline void ftrace_trace_stack(struct trace_array *tr,
+3-4
kernel/trace/trace_events.c
···826826 * When soft_disable is set and enable is set, we want to827827 * register the tracepoint for the event, but leave the event828828 * as is. That means, if the event was already enabled, we do829829- * nothing (but set soft_mode). If the event is disabled, we830830- * set SOFT_DISABLED before enabling the event tracepoint, so831831- * it still seems to be disabled.829829+ * nothing. If the event is disabled, we set SOFT_DISABLED830830+ * before enabling the event tracepoint, so it still seems831831+ * to be disabled.832832 */833833 if (!soft_disable)834834 clear_bit(EVENT_FILE_FL_SOFT_DISABLED_BIT, &file->flags);835835 else {836836 if (atomic_inc_return(&file->sm_ref) > 1)837837 break;838838- soft_mode = true;839838 /* Enable use of trace_buffered_event */840839 trace_buffered_event_enable();841840 }
···22/*33 * Generic infrastructure for lifetime debugging of objects.44 *55- * Copyright (C) 2008, Thomas Gleixner <tglx@linutronix.de>55+ * Copyright (C) 2008, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>66 */7788#define pr_fmt(fmt) "ODEBUG: " fmt
+1-1
lib/plist.c
···1010 * 2001-2005 (c) MontaVista Software, Inc.1111 * Daniel Walker <dwalker@mvista.com>1212 *1313- * (C) 2005 Thomas Gleixner <tglx@linutronix.de>1313+ * (C) 2005 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>1414 *1515 * Simplifications of the original code by1616 * Oleg Nesterov <oleg@tv-sign.ru>
+1-1
lib/reed_solomon/decode_rs.c
···55 * Copyright 2002, Phil Karn, KA9Q66 * May be used under the terms of the GNU General Public License (GPL)77 *88- * Adaption to the kernel by Thomas Gleixner (tglx@linutronix.de)88+ * Adaption to the kernel by Thomas Gleixner (tglx@kernel.org)99 *1010 * Generic data width independent code which is included by the wrappers.1111 */
+1-1
lib/reed_solomon/encode_rs.c
···55 * Copyright 2002, Phil Karn, KA9Q66 * May be used under the terms of the GNU General Public License (GPL)77 *88- * Adaption to the kernel by Thomas Gleixner (tglx@linutronix.de)88+ * Adaption to the kernel by Thomas Gleixner (tglx@kernel.org)99 *1010 * Generic data width independent code which is included by the wrappers.1111 */
+1-1
lib/reed_solomon/reed_solomon.c
···22/*33 * Generic Reed Solomon encoder / decoder library44 *55- * Copyright (C) 2004 Thomas Gleixner (tglx@linutronix.de)55+ * Copyright (C) 2004 Thomas Gleixner (tglx@kernel.org)66 *77 * Reed Solomon code lifted from reed solomon library written by Phil Karn88 * Copyright 2002 Phil Karn, KA9Q
+1
net/bluetooth/hci_sync.c
···44204420 if (bis_capable(hdev)) {44214421 events[1] |= 0x20; /* LE PA Report */44224422 events[1] |= 0x40; /* LE PA Sync Established */44234423+ events[1] |= 0x80; /* LE PA Sync Lost */44234424 events[3] |= 0x04; /* LE Create BIG Complete */44244425 events[3] |= 0x08; /* LE Terminate BIG Complete */44254426 events[3] |= 0x10; /* LE BIG Sync Established */
+17-8
net/bpf/test_run.c
···12941294 batch_size = NAPI_POLL_WEIGHT;12951295 else if (batch_size > TEST_XDP_MAX_BATCH)12961296 return -E2BIG;12971297-12981298- headroom += sizeof(struct xdp_page_head);12991297 } else if (batch_size) {13001298 return -EINVAL;13011299 }···13061308 /* There can't be user provided data before the meta data */13071309 if (ctx->data_meta || ctx->data_end > kattr->test.data_size_in ||13081310 ctx->data > ctx->data_end ||13091309- unlikely(xdp_metalen_invalid(ctx->data)) ||13101311 (do_live && (kattr->test.data_out || kattr->test.ctx_out)))13111312 goto free_ctx;13121312- /* Meta data is allocated from the headroom */13131313- headroom -= ctx->data;1314131313151314 meta_sz = ctx->data;13151315+ if (xdp_metalen_invalid(meta_sz) || meta_sz > headroom - sizeof(struct xdp_frame))13161316+ goto free_ctx;13171317+13181318+ /* Meta data is allocated from the headroom */13191319+ headroom -= meta_sz;13161320 linear_sz = ctx->data_end;13171321 }13221322+13231323+ /* The xdp_page_head structure takes up space in each page, limiting the13241324+ * size of the packet data; add the extra size to headroom here to make13251325+ * sure it's accounted in the length checks below, but not in the13261326+ * metadata size check above.13271327+ */13281328+ if (do_live)13291329+ headroom += sizeof(struct xdp_page_head);1318133013191331 max_linear_sz = PAGE_SIZE - headroom - tailroom;13201332 linear_sz = min_t(u32, linear_sz, max_linear_sz);···1363135513641356 if (sinfo->nr_frags == MAX_SKB_FRAGS) {13651357 ret = -ENOMEM;13661366- goto out;13581358+ goto out_put_dev;13671359 }1368136013691361 page = alloc_page(GFP_KERNEL);13701362 if (!page) {13711363 ret = -ENOMEM;13721372- goto out;13641364+ goto out_put_dev;13731365 }1374136613751367 frag = &sinfo->frags[sinfo->nr_frags++];···13811373 if (copy_from_user(page_address(page), data_in + size,13821374 data_len)) {13831375 ret = -EFAULT;13841384- goto out;13761376+ goto out_put_dev;13851377 }13861378 sinfo->xdp_frags_size += 
data_len;13871379 size += data_len;···13961388 ret = bpf_test_run_xdp_live(prog, &xdp, repeat, batch_size, &duration);13971389 else13981390 ret = bpf_test_run(prog, &xdp, repeat, &retval, &duration, true);13911391+out_put_dev:13991392 /* We convert the xdp_buff back to an xdp_md before checking the return14001393 * code so the reference count of any held netdevice will be decremented14011394 * even if the test run failed.
···221221 if (test_bit(BR_FDB_LOCAL, &dst->flags))222222 return br_pass_frame_up(skb, false);223223224224- if (now != dst->used)225225- dst->used = now;224224+ if (now != READ_ONCE(dst->used))225225+ WRITE_ONCE(dst->used, now);226226 br_forward(dst->dst, skb, local_rcv, false);227227 } else {228228 if (!mcast_hit)
+9-1
net/can/j1939/transport.c
···1695169516961696 j1939_session_timers_cancel(session);16971697 j1939_session_cancel(session, J1939_XTP_ABORT_BUSY);16981698- if (session->transmission)16981698+ if (session->transmission) {16991699 j1939_session_deactivate_activate_next(session);17001700+ } else if (session->state == J1939_SESSION_WAITING_ABORT) {17011701+ /* Force deactivation for the receiver.17021702+ * If we rely on the timer starting in j1939_session_cancel,17031703+ * a second RTS call here will cancel that timer and fail17041704+ * to restart it because the state is already WAITING_ABORT.17051705+ */17061706+ j1939_session_deactivate_activate_next(session);17071707+ }1700170817011709 return -EBUSY;17021710 }
+10-41
net/can/raw.c
···4949#include <linux/if_arp.h>5050#include <linux/skbuff.h>5151#include <linux/can.h>5252+#include <linux/can/can-ml.h>5253#include <linux/can/core.h>5353-#include <linux/can/dev.h> /* for can_is_canxl_dev_mtu() */5454#include <linux/can/skb.h>5555#include <linux/can/raw.h>5656#include <net/sock.h>···892892 }893893}894894895895-static inline bool raw_dev_cc_enabled(struct net_device *dev,896896- struct can_priv *priv)897897-{898898- /* The CANXL-only mode disables error-signalling on the CAN bus899899- * which is needed to send CAN CC/FD frames900900- */901901- if (priv)902902- return !can_dev_in_xl_only_mode(priv);903903-904904- /* virtual CAN interfaces always support CAN CC */905905- return true;906906-}907907-908908-static inline bool raw_dev_fd_enabled(struct net_device *dev,909909- struct can_priv *priv)910910-{911911- /* check FD ctrlmode on real CAN interfaces */912912- if (priv)913913- return (priv->ctrlmode & CAN_CTRLMODE_FD);914914-915915- /* check MTU for virtual CAN FD interfaces */916916- return (READ_ONCE(dev->mtu) >= CANFD_MTU);917917-}918918-919919-static inline bool raw_dev_xl_enabled(struct net_device *dev,920920- struct can_priv *priv)921921-{922922- /* check XL ctrlmode on real CAN interfaces */923923- if (priv)924924- return (priv->ctrlmode & CAN_CTRLMODE_XL);925925-926926- /* check MTU for virtual CAN XL interfaces */927927- return can_is_canxl_dev_mtu(READ_ONCE(dev->mtu));928928-}929929-930895static unsigned int raw_check_txframe(struct raw_sock *ro, struct sk_buff *skb,931896 struct net_device *dev)932897{933933- struct can_priv *priv = safe_candev_priv(dev);934934-935898 /* Classical CAN */936936- if (can_is_can_skb(skb) && raw_dev_cc_enabled(dev, priv))899899+ if (can_is_can_skb(skb) && can_cap_enabled(dev, CAN_CAP_CC))937900 return CAN_MTU;938901939902 /* CAN FD */940903 if (ro->fd_frames && can_is_canfd_skb(skb) &&941941- raw_dev_fd_enabled(dev, priv))904904+ can_cap_enabled(dev, CAN_CAP_FD))942905 return CANFD_MTU;943906944907 /* CAN 
XL */945908 if (ro->xl_frames && can_is_canxl_skb(skb) &&946946- raw_dev_xl_enabled(dev, priv))909909+ can_cap_enabled(dev, CAN_CAP_XL))947910 return CANXL_MTU;948911949912 return 0;···944981 dev = dev_get_by_index(sock_net(sk), ifindex);945982 if (!dev)946983 return -ENXIO;984984+985985+ /* no sending on a CAN device in read-only mode */986986+ if (can_cap_enabled(dev, CAN_CAP_RO)) {987987+ err = -EACCES;988988+ goto put_dev;989989+ }947990948991 skb = sock_alloc_send_skb(sk, size + sizeof(struct can_skb_priv),949992 msg->msg_flags & MSG_DONTWAIT, &err);
···31513151 int err;3152315231533153 if (family == AF_INET &&31543154+ (!x->dir || x->dir == XFRM_SA_DIR_OUT) &&31543155 READ_ONCE(xs_net(x)->ipv4.sysctl_ip_no_pmtu_disc))31553156 x->props.flags |= XFRM_STATE_NOPMTUDISC;31563157
+42
rust/helpers/bitops.c
···11// SPDX-License-Identifier: GPL-2.02233#include <linux/bitops.h>44+#include <linux/find.h>4556void rust_helper___set_bit(unsigned long nr, unsigned long *addr)67{···2221{2322 clear_bit(nr, addr);2423}2424+2525+/*2626+ * The rust_helper_ prefix is intentionally omitted below so that the2727+ * declarations in include/linux/find.h are compatible with these helpers.2828+ *2929+ * Note that the below #ifdefs mean that the helper is only created if C does3030+ * not provide a definition.3131+ */3232+#ifdef find_first_zero_bit3333+__rust_helper3434+unsigned long _find_first_zero_bit(const unsigned long *p, unsigned long size)3535+{3636+ return find_first_zero_bit(p, size);3737+}3838+#endif /* find_first_zero_bit */3939+4040+#ifdef find_next_zero_bit4141+__rust_helper4242+unsigned long _find_next_zero_bit(const unsigned long *addr,4343+ unsigned long size, unsigned long offset)4444+{4545+ return find_next_zero_bit(addr, size, offset);4646+}4747+#endif /* find_next_zero_bit */4848+4949+#ifdef find_first_bit5050+__rust_helper5151+unsigned long _find_first_bit(const unsigned long *addr, unsigned long size)5252+{5353+ return find_first_bit(addr, size);5454+}5555+#endif /* find_first_bit */5656+5757+#ifdef find_next_bit5858+__rust_helper5959+unsigned long _find_next_bit(const unsigned long *addr, unsigned long size,6060+ unsigned long offset)6161+{6262+ return find_next_bit(addr, size, offset);6363+}6464+#endif /* find_next_bit */
+3-4
rust/kernel/device.rs
···14141515#[cfg(CONFIG_PRINTK)]1616use crate::c_str;1717-use crate::str::CStrExt as _;18171918pub mod property;2019···6667///6768/// # Implementing Bus Devices6869///6969-/// This section provides a guideline to implement bus specific devices, such as [`pci::Device`] or7070-/// [`platform::Device`].7070+/// This section provides a guideline to implement bus specific devices, such as:7171+#[cfg_attr(CONFIG_PCI, doc = "* [`pci::Device`](kernel::pci::Device)")]7272+/// * [`platform::Device`]7173///7274/// A bus specific device should be defined as follows.7375///···160160///161161/// [`AlwaysRefCounted`]: kernel::types::AlwaysRefCounted162162/// [`impl_device_context_deref`]: kernel::impl_device_context_deref163163-/// [`pci::Device`]: kernel::pci::Device164163/// [`platform::Device`]: kernel::platform::Device165164#[repr(transparent)]166165pub struct Device<Ctx: DeviceContext = Normal>(Opaque<bindings::device>, PhantomData<Ctx>);
+1-1
rust/kernel/device_id.rs
···1515/// # Safety1616///1717/// Implementers must ensure that `Self` is layout-compatible with [`RawDeviceId::RawType`];1818-/// i.e. it's safe to transmute to `RawDeviceId`.1818+/// i.e. it's safe to transmute to `RawType`.1919///2020/// This requirement is needed so `IdArray::new` can convert `Self` to `RawType` when building2121/// the ID table.
+3-4
rust/kernel/dma.rs
···2727/// Trait to be implemented by DMA capable bus devices.2828///2929/// The [`dma::Device`](Device) trait should be implemented by bus specific device representations,3030-/// where the underlying bus is DMA capable, such as [`pci::Device`](::kernel::pci::Device) or3131-/// [`platform::Device`](::kernel::platform::Device).3030+/// where the underlying bus is DMA capable, such as:3131+#[cfg_attr(CONFIG_PCI, doc = "* [`pci::Device`](kernel::pci::Device)")]3232+/// * [`platform::Device`](::kernel::platform::Device)3233pub trait Device: AsRef<device::Device<Core>> {3334 /// Set up the device's DMA streaming addressing capabilities.3435 ///···533532 ///534533 /// # Safety535534 ///536536- /// * Callers must ensure that the device does not read/write to/from memory while the returned537537- /// slice is live.538535 /// * Callers must ensure that this call does not race with a read or write to the same region539536 /// that overlaps with this write.540537 ///
+8-4
rust/kernel/driver.rs
···3333//! }3434//! ```3535//!3636-//! For specific examples see [`auxiliary::Driver`], [`pci::Driver`] and [`platform::Driver`].3636+//! For specific examples see:3737+//!3838+//! * [`platform::Driver`](kernel::platform::Driver)3939+#[cfg_attr(4040+    CONFIG_AUXILIARY_BUS,4141+    doc = "* [`auxiliary::Driver`](kernel::auxiliary::Driver)"4242+)]4343+#[cfg_attr(CONFIG_PCI, doc = "* [`pci::Driver`](kernel::pci::Driver)")]3744//!3845//! The `probe()` callback should return a `impl PinInit<Self, Error>`, i.e. the driver's private3946//! data. The bus abstraction should store the pointer in the corresponding bus device. The generic···8679//!8780//! For this purpose the generic infrastructure in [`device_id`] should be used.8881//!8989-//! [`auxiliary::Driver`]: kernel::auxiliary::Driver9082//! [`Core`]: device::Core9183//! [`Device`]: device::Device9284//! [`Device<Core>`]: device::Device<device::Core>···9387//! [`DeviceContext`]: device::DeviceContext9488//! [`device_id`]: kernel::device_id9589//! [`module_driver`]: kernel::module_driver9696-//! [`pci::Driver`]: kernel::pci::Driver9797-//! [`platform::Driver`]: kernel::platform::Driver98909991use crate::error::{Error, Result};10092use crate::{acpi, device, of, str::CStr, try_pin_init, types::Opaque, ThisModule};
+2-2
rust/kernel/pci/io.rs
···2020///2121/// # Invariants2222///2323-/// `Bar` always holds an `IoRaw` inststance that holds a valid pointer to the start of the I/O2323+/// `Bar` always holds an `IoRaw` instance that holds a valid pointer to the start of the I/O2424/// memory mapped PCI BAR and its size.2525pub struct Bar<const SIZE: usize = 0> {2626 pdev: ARef<Device>,···5454 let ioptr: usize = unsafe { bindings::pci_iomap(pdev.as_raw(), num, 0) } as usize;5555 if ioptr == 0 {5656 // SAFETY:5757- // `pdev` valid by the invariants of `Device`.5757+ // `pdev` is valid by the invariants of `Device`.5858 // `num` is checked for validity by a previous call to `Device::resource_len`.5959 unsafe { bindings::pci_release_region(pdev.as_raw(), num) };6060 return Err(ENOMEM);
···22/*33 * Performance events:44 *55- * Copyright (C) 2008-2009, Thomas Gleixner <tglx@linutronix.de>55+ * Copyright (C) 2008-2009, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>66 * Copyright (C) 2008-2011, Red Hat, Inc., Ingo Molnar77 * Copyright (C) 2008-2011, Red Hat, Inc., Peter Zijlstra88 *
+6-3
tools/net/ynl/pyynl/lib/doc_generator.py
···165165 continue166166 lines.append(self.fmt.rst_paragraph(self.fmt.bold(key), level + 1))167167 if key in ['request', 'reply']:168168- lines.append(self.parse_do_attributes(do_dict[key], level + 1) + "\n")168168+ lines.append(self.parse_op_attributes(do_dict[key], level + 1) + "\n")169169 else:170170 lines.append(self.fmt.headroom(level + 2) + do_dict[key] + "\n")171171172172 return "\n".join(lines)173173174174- def parse_do_attributes(self, attrs: Dict[str, Any], level: int = 0) -> str:174174+ def parse_op_attributes(self, attrs: Dict[str, Any], level: int = 0) -> str:175175 """Parse 'attributes' section"""176176 if "attributes" not in attrs:177177 return ""···183183184184 def parse_operations(self, operations: List[Dict[str, Any]], namespace: str) -> str:185185 """Parse operations block"""186186- preprocessed = ["name", "doc", "title", "do", "dump", "flags"]186186+ preprocessed = ["name", "doc", "title", "do", "dump", "flags", "event"]187187 linkable = ["fixed-header", "attribute-set"]188188 lines = []189189···216216 if "dump" in operation:217217 lines.append(self.fmt.rst_paragraph(":dump:", 0))218218 lines.append(self.parse_do(operation["dump"], 0))219219+ if "event" in operation:220220+ lines.append(self.fmt.rst_paragraph(":event:", 0))221221+ lines.append(self.parse_op_attributes(operation["event"], 0))219222220223 # New line after fields221224 lines.append("\n")
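The new `event` branch above renders an operation's event attributes with the same helper used for `do`/`dump` request and reply attributes. A minimal standalone sketch of that rendering step (not the real `YnlDocGenerator` class; `headroom` and the attribute shapes here are illustrative assumptions):

```python
from typing import Any, Dict

def headroom(level: int) -> str:
    """Indentation helper: two spaces per nesting level (illustrative)."""
    return "  " * level

def parse_op_attributes(attrs: Dict[str, Any], level: int = 0) -> str:
    """Render the 'attributes' list of a do/dump/event block as an RST list."""
    if "attributes" not in attrs:
        return ""
    return "\n".join(headroom(level) + f"- {a}" for a in attrs["attributes"])

# An 'event' operation carries its attributes directly, like a reply does,
# which is why the same parser serves all three operation kinds.
event = {"attributes": ["ifindex", "state", "reason"]}
print(parse_op_attributes(event, level=1))
```

The rename from `parse_do_attributes` to `parse_op_attributes` reflects exactly this: the helper is no longer specific to `do` operations.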
+1-1
tools/perf/builtin-list.c
···44 *55 * Builtin list command: list all event types66 *77- * Copyright (C) 2009, Thomas Gleixner <tglx@linutronix.de>77+ * Copyright (C) 2009, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org>88 * Copyright (C) 2008-2009, Red Hat Inc, Ingo Molnar <mingo@redhat.com>99 * Copyright (C) 2011, Red Hat Inc, Arnaldo Carvalho de Melo <acme@redhat.com>1010 */
···4747 struct test_xdp_context_test_run *skel = NULL;4848 char data[sizeof(pkt_v4) + sizeof(__u32)];4949 char bad_ctx[sizeof(struct xdp_md) + 1];5050+ char large_data[256];5051 struct xdp_md ctx_in, ctx_out;5152 DECLARE_LIBBPF_OPTS(bpf_test_run_opts, opts,5253 .data_in = &data,···9594 test_xdp_context_error(prog_fd, opts, 4, sizeof(__u32), sizeof(data),9695 0, 0, 0);97969898- /* Meta data must be 255 bytes or smaller */9999- test_xdp_context_error(prog_fd, opts, 0, 256, sizeof(data), 0, 0, 0);100100-10197 /* Total size of data must be data_end - data_meta or larger */10298 test_xdp_context_error(prog_fd, opts, 0, sizeof(__u32),10399 sizeof(data) + 1, 0, 0, 0);···113115 /* The egress cannot be specified */114116 test_xdp_context_error(prog_fd, opts, 0, sizeof(__u32), sizeof(data),115117 0, 0, 1);118118+119119+ /* Meta data must be 216 bytes or smaller (256 - sizeof(struct120120+ * xdp_frame)). Test both nearest invalid size and nearest invalid121121+ * 4-byte-aligned size, and make sure data_in is large enough that we122122+ * actually hit the check on metadata length123123+ */124124+ opts.data_in = large_data;125125+ opts.data_size_in = sizeof(large_data);126126+ test_xdp_context_error(prog_fd, opts, 0, 217, sizeof(large_data), 0, 0, 0);127127+ test_xdp_context_error(prog_fd, opts, 0, 220, sizeof(large_data), 0, 0, 0);116128117129 test_xdp_context_test_run__destroy(skel);118130}
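The 216-byte limit in the hunk above follows from the comment's arithmetic: a 256-byte budget minus `sizeof(struct xdp_frame)`. A quick sanity check of the two probed sizes, taking the struct size as implied by the comment rather than computing it (a sketch, not kernel code):

```python
# Per the test's comment: 256-byte budget, minus struct xdp_frame.
XDP_FRAME_SIZE = 256 - 216   # implied by the 216-byte limit in the comment
META_MAX = 256 - XDP_FRAME_SIZE
assert META_MAX == 216

# The test probes the nearest invalid size and the nearest invalid
# 4-byte-aligned size, since metadata lengths are 4-byte aligned.
nearest_invalid = META_MAX + 1
nearest_invalid_aligned = next(n for n in range(META_MAX + 1, META_MAX + 8)
                               if n % 4 == 0)
print(nearest_invalid, nearest_invalid_aligned)
```

This matches the 217 and 220 passed to `test_xdp_context_error` in the hunk.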
+2-2
tools/testing/selftests/drivers/net/hw/toeplitz.c
···485485486486 bitmap = strtoul(arg, NULL, 0);487487488488- if (bitmap & ~(RPS_MAX_CPUS - 1))489489- error(1, 0, "rps bitmap 0x%lx out of bounds 0..%lu",488488+ if (bitmap & ~((1UL << RPS_MAX_CPUS) - 1))489489+ error(1, 0, "rps bitmap 0x%lx out of bounds, max cpu %lu",490490 bitmap, RPS_MAX_CPUS - 1);491491492492 for (i = 0; i < RPS_MAX_CPUS; i++)
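The bug fixed above is that `~(RPS_MAX_CPUS - 1)` treats the CPU *count* as if it were already a bitmap mask, so any CPU bit at or above log2(RPS_MAX_CPUS) was wrongly rejected. A small model of both checks (the `RPS_MAX_CPUS` value here is illustrative):

```python
RPS_MAX_CPUS = 16  # illustrative value

def old_check(bitmap: int) -> bool:
    # Buggy: masks against the CPU count, not a full CPU bitmap,
    # so any bit >= log2(RPS_MAX_CPUS) trips the bounds error.
    return bool(bitmap & ~(RPS_MAX_CPUS - 1))

def new_check(bitmap: int) -> bool:
    # Correct: only bits at position >= RPS_MAX_CPUS are out of bounds.
    return bool(bitmap & ~((1 << RPS_MAX_CPUS) - 1))

# CPU 5 is valid with 16 CPUs, but the old check rejected it:
assert old_check(1 << 5)        # false positive
assert not new_check(1 << 5)    # accepted after the fix
# A bit for CPU 16 really is out of bounds; the fixed check catches it:
assert new_check(1 << 16)
```

Python's arbitrary-precision `~` behaves like two's complement here, so the masks mirror the C expressions bit for bit.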
···9494 mask = 09595 for cpu in rps_cpus:9696 mask |= (1 << cpu)9797- mask = hex(mask)[2:]9797+9898+ mask = hex(mask)989999100 # Set RPS bitmap for all rx queues100101 for rps_file in glob.glob(f"/sys/class/net/{cfg.ifname}/queues/rx-*/rps_cpus"):101102 with open(rps_file, "w", encoding="utf-8") as fp:102102- fp.write(mask)103103+ # sysfs expects hex without '0x' prefix, toeplitz.c needs the prefix104104+ fp.write(mask[2:])103105104106 return mask105107
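The change above keeps the `0x` prefix on the returned mask (the C side parses it with `strtoul(arg, NULL, 0)`, which detects the base from the prefix) while stripping it for sysfs, which expects bare hex digits. A minimal model with an illustrative CPU list:

```python
rps_cpus = [0, 2, 3]   # illustrative CPU list

mask = 0
for cpu in rps_cpus:
    mask |= 1 << cpu
mask = hex(mask)        # keep '0x' prefix for strtoul(..., base=0) on the C side

sysfs_value = mask[2:]  # sysfs rps_cpus wants hex without the '0x' prefix
print(mask, sysfs_value)
```

Writing the prefixed form to sysfs was the original bug; the split between the returned value and the written value resolves it.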
···8989 # The id must be four bytes, test that 3 bytes fails a write9090 if echo -n abc > ./trace_marker_raw ; then9191 echo "Too small of write expected to fail but did not"9292+ echo ${ORIG} > buffer_size_kb9293 exit_fail9394 fi9495···10099101100 if write_buffer 0xdeadbeef $size ; then102101 echo "Too big of write expected to fail but did not"102102+ echo ${ORIG} > buffer_size_kb103103 exit_fail104104 fi105105}106106107107+ORIG=`cat buffer_size_kb`108108+109109+# test_multiple_writes test needs at least 12KB buffer110110+NEW_SIZE=12111111+112112+if [ ${ORIG} -lt ${NEW_SIZE} ]; then113113+ echo ${NEW_SIZE} > buffer_size_kb114114+fi115115+107116test_buffer108108-test_multiple_writes117117+if ! test_multiple_writes; then118118+ echo ${ORIG} > buffer_size_kb119119+ exit_fail120120+fi121121+122122+echo ${ORIG} > buffer_size_kb
+85-59
tools/testing/selftests/kvm/x86/amx_test.c
···6969 : : "a"(tile), "d"(0));7070}71717272+static inline int tileloadd_safe(void *tile)7373+{7474+ return kvm_asm_safe(".byte 0xc4,0xe2,0x7b,0x4b,0x04,0x10",7575+ "a"(tile), "d"(0));7676+}7777+7278static inline void __tilerelease(void)7379{7480 asm volatile(".byte 0xc4, 0xe2, 0x78, 0x49, 0xc0" ::);···130124 }131125}132126127127+enum {128128+ /* Retrieve TMM0 from guest, stash it for TEST_RESTORE_TILEDATA */129129+ TEST_SAVE_TILEDATA = 1,130130+131131+ /* Check TMM0 against tiledata */132132+ TEST_COMPARE_TILEDATA = 2,133133+134134+ /* Restore TMM0 from earlier save */135135+ TEST_RESTORE_TILEDATA = 4,136136+137137+ /* Full VM save/restore */138138+ TEST_SAVE_RESTORE = 8,139139+};140140+133141static void __attribute__((__flatten__)) guest_code(struct tile_config *amx_cfg,134142 struct tile_data *tiledata,135143 struct xstate *xstate)136144{145145+ int vector;146146+137147 GUEST_ASSERT(this_cpu_has(X86_FEATURE_XSAVE) &&138148 this_cpu_has(X86_FEATURE_OSXSAVE));139149 check_xtile_info();140140- GUEST_SYNC(1);150150+ GUEST_SYNC(TEST_SAVE_RESTORE);141151142152 /* xfd=0, enable amx */143153 wrmsr(MSR_IA32_XFD, 0);144144- GUEST_SYNC(2);154154+ GUEST_SYNC(TEST_SAVE_RESTORE);145155 GUEST_ASSERT(rdmsr(MSR_IA32_XFD) == 0);146156 set_tilecfg(amx_cfg);147157 __ldtilecfg(amx_cfg);148148- GUEST_SYNC(3);158158+ GUEST_SYNC(TEST_SAVE_RESTORE);149159 /* Check save/restore when trap to userspace */150160 __tileloadd(tiledata);151151- GUEST_SYNC(4);161161+ GUEST_SYNC(TEST_SAVE_TILEDATA | TEST_COMPARE_TILEDATA | TEST_SAVE_RESTORE);162162+163163+ /* xfd=0x40000, disable amx tiledata */164164+ wrmsr(MSR_IA32_XFD, XFEATURE_MASK_XTILE_DATA);165165+166166+ /* host tries setting tiledata while guest XFD is set */167167+ GUEST_SYNC(TEST_RESTORE_TILEDATA);168168+ GUEST_SYNC(TEST_SAVE_RESTORE);169169+170170+ wrmsr(MSR_IA32_XFD, 0);152171 __tilerelease();153153- GUEST_SYNC(5);172172+ GUEST_SYNC(TEST_SAVE_RESTORE);154173 /*155174 * After XSAVEC, XTILEDATA is cleared in the xstate_bv but is set 
in156175 * the xcomp_bv.···184153 __xsavec(xstate, XFEATURE_MASK_XTILE_DATA);185154 GUEST_ASSERT(!(xstate->header.xstate_bv & XFEATURE_MASK_XTILE_DATA));186155 GUEST_ASSERT(xstate->header.xcomp_bv & XFEATURE_MASK_XTILE_DATA);156156+157157+ /* #NM test */187158188159 /* xfd=0x40000, disable amx tiledata */189160 wrmsr(MSR_IA32_XFD, XFEATURE_MASK_XTILE_DATA);···199166 GUEST_ASSERT(!(xstate->header.xstate_bv & XFEATURE_MASK_XTILE_DATA));200167 GUEST_ASSERT((xstate->header.xcomp_bv & XFEATURE_MASK_XTILE_DATA));201168202202- GUEST_SYNC(6);169169+ GUEST_SYNC(TEST_SAVE_RESTORE);203170 GUEST_ASSERT(rdmsr(MSR_IA32_XFD) == XFEATURE_MASK_XTILE_DATA);204171 set_tilecfg(amx_cfg);205172 __ldtilecfg(amx_cfg);173173+206174 /* Trigger #NM exception */207207- __tileloadd(tiledata);208208- GUEST_SYNC(10);175175+ vector = tileloadd_safe(tiledata);176176+ __GUEST_ASSERT(vector == NM_VECTOR,177177+ "Wanted #NM on tileloadd with XFD[18]=1, got %s",178178+ ex_str(vector));209179210210- GUEST_DONE();211211-}212212-213213-void guest_nm_handler(struct ex_regs *regs)214214-{215215- /* Check if #NM is triggered by XFEATURE_MASK_XTILE_DATA */216216- GUEST_SYNC(7);217180 GUEST_ASSERT(!(get_cr0() & X86_CR0_TS));218181 GUEST_ASSERT(rdmsr(MSR_IA32_XFD_ERR) == XFEATURE_MASK_XTILE_DATA);219182 GUEST_ASSERT(rdmsr(MSR_IA32_XFD) == XFEATURE_MASK_XTILE_DATA);220220- GUEST_SYNC(8);183183+ GUEST_SYNC(TEST_SAVE_RESTORE);221184 GUEST_ASSERT(rdmsr(MSR_IA32_XFD_ERR) == XFEATURE_MASK_XTILE_DATA);222185 GUEST_ASSERT(rdmsr(MSR_IA32_XFD) == XFEATURE_MASK_XTILE_DATA);223186 /* Clear xfd_err */224187 wrmsr(MSR_IA32_XFD_ERR, 0);225188 /* xfd=0, enable amx */226189 wrmsr(MSR_IA32_XFD, 0);227227- GUEST_SYNC(9);190190+ GUEST_SYNC(TEST_SAVE_RESTORE);191191+192192+ __tileloadd(tiledata);193193+ GUEST_SYNC(TEST_COMPARE_TILEDATA | TEST_SAVE_RESTORE);194194+195195+ GUEST_DONE();228196}229197230198int main(int argc, char *argv[])···234200 struct kvm_vcpu *vcpu;235201 struct kvm_vm *vm;236202 struct kvm_x86_state 
*state;203203+ struct kvm_x86_state *tile_state = NULL;237204 int xsave_restore_size;238205 vm_vaddr_t amx_cfg, tiledata, xstate;239206 struct ucall uc;240240- u32 amx_offset;241207 int ret;242208243209 /*···262228263229 vcpu_regs_get(vcpu, ®s1);264230265265- /* Register #NM handler */266266- vm_install_exception_handler(vm, NM_VECTOR, guest_nm_handler);267267-268231 /* amx cfg for guest_code */269232 amx_cfg = vm_vaddr_alloc_page(vm);270233 memset(addr_gva2hva(vm, amx_cfg), 0x0, getpagesize());···275244 memset(addr_gva2hva(vm, xstate), 0, PAGE_SIZE * DIV_ROUND_UP(XSAVE_SIZE, PAGE_SIZE));276245 vcpu_args_set(vcpu, 3, amx_cfg, tiledata, xstate);277246247247+ int iter = 0;278248 for (;;) {279249 vcpu_run(vcpu);280250 TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_IO);···285253 REPORT_GUEST_ASSERT(uc);286254 /* NOT REACHED */287255 case UCALL_SYNC:288288- switch (uc.args[1]) {289289- case 1:290290- case 2:291291- case 3:292292- case 5:293293- case 6:294294- case 7:295295- case 8:296296- fprintf(stderr, "GUEST_SYNC(%ld)\n", uc.args[1]);297297- break;298298- case 4:299299- case 10:300300- fprintf(stderr,301301- "GUEST_SYNC(%ld), check save/restore status\n", uc.args[1]);256256+ ++iter;257257+ if (uc.args[1] & TEST_SAVE_TILEDATA) {258258+ fprintf(stderr, "GUEST_SYNC #%d, save tiledata\n", iter);259259+ tile_state = vcpu_save_state(vcpu);260260+ }261261+ if (uc.args[1] & TEST_COMPARE_TILEDATA) {262262+ fprintf(stderr, "GUEST_SYNC #%d, check TMM0 contents\n", iter);302263303264 /* Compacted mode, get amx offset by xsave area304265 * size subtract 8K amx size.305266 */306306- amx_offset = xsave_restore_size - NUM_TILES*TILE_SIZE;307307- state = vcpu_save_state(vcpu);308308- void *amx_start = (void *)state->xsave + amx_offset;267267+ u32 amx_offset = xsave_restore_size - NUM_TILES*TILE_SIZE;268268+ void *amx_start = (void *)tile_state->xsave + amx_offset;309269 void *tiles_data = (void *)addr_gva2hva(vm, tiledata);310270 /* Only check TMM0 register, 1 tile */311271 ret = 
memcmp(amx_start, tiles_data, TILE_SIZE);312272 TEST_ASSERT(ret == 0, "memcmp failed, ret=%d", ret);273273+ }274274+ if (uc.args[1] & TEST_RESTORE_TILEDATA) {275275+ fprintf(stderr, "GUEST_SYNC #%d, before KVM_SET_XSAVE\n", iter);276276+ vcpu_xsave_set(vcpu, tile_state->xsave);277277+ fprintf(stderr, "GUEST_SYNC #%d, after KVM_SET_XSAVE\n", iter);278278+ }279279+ if (uc.args[1] & TEST_SAVE_RESTORE) {280280+ fprintf(stderr, "GUEST_SYNC #%d, save/restore VM state\n", iter);281281+ state = vcpu_save_state(vcpu);282282+ memset(®s1, 0, sizeof(regs1));283283+ vcpu_regs_get(vcpu, ®s1);284284+285285+ kvm_vm_release(vm);286286+287287+ /* Restore state in a new VM. */288288+ vcpu = vm_recreate_with_one_vcpu(vm);289289+ vcpu_load_state(vcpu, state);313290 kvm_x86_state_cleanup(state);314314- break;315315- case 9:316316- fprintf(stderr,317317- "GUEST_SYNC(%ld), #NM exception and enable amx\n", uc.args[1]);318318- break;291291+292292+ memset(®s2, 0, sizeof(regs2));293293+ vcpu_regs_get(vcpu, ®s2);294294+ TEST_ASSERT(!memcmp(®s1, ®s2, sizeof(regs2)),295295+ "Unexpected register values after vcpu_load_state; rdi: %lx rsi: %lx",296296+ (ulong) regs2.rdi, (ulong) regs2.rsi);319297 }320298 break;321299 case UCALL_DONE:···335293 TEST_FAIL("Unknown ucall %lu", uc.cmd);336294 }337295338338- state = vcpu_save_state(vcpu);339339- memset(®s1, 0, sizeof(regs1));340340- vcpu_regs_get(vcpu, ®s1);341341-342342- kvm_vm_release(vm);343343-344344- /* Restore state in a new VM. */345345- vcpu = vm_recreate_with_one_vcpu(vm);346346- vcpu_load_state(vcpu, state);347347- kvm_x86_state_cleanup(state);348348-349349- memset(®s2, 0, sizeof(regs2));350350- vcpu_regs_get(vcpu, ®s2);351351- TEST_ASSERT(!memcmp(®s1, ®s2, sizeof(regs2)),352352- "Unexpected register values after vcpu_load_state; rdi: %lx rsi: %lx",353353- (ulong) regs2.rdi, (ulong) regs2.rsi);354296 }355297done:356298 kvm_vm_free(vm);
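The rewrite above replaces the numbered `GUEST_SYNC(1..10)` cases with bit flags, so a single sync point can request several host-side actions at once. A minimal model of that dispatch (flag values mirror the test's enum; the handlers are stand-ins for the real save/compare/restore logic):

```python
# Flag values mirror the test's enum; actions are illustrative stand-ins.
TEST_SAVE_TILEDATA    = 1
TEST_COMPARE_TILEDATA = 2
TEST_RESTORE_TILEDATA = 4
TEST_SAVE_RESTORE     = 8

def handle_sync(flags: int) -> list:
    """Run every action requested by this sync point, in a fixed order."""
    actions = []
    if flags & TEST_SAVE_TILEDATA:
        actions.append("save tiledata")
    if flags & TEST_COMPARE_TILEDATA:
        actions.append("compare tiledata")
    if flags & TEST_RESTORE_TILEDATA:
        actions.append("restore tiledata")
    if flags & TEST_SAVE_RESTORE:
        actions.append("save/restore VM")
    return actions

# One sync can combine actions, as in
# GUEST_SYNC(TEST_SAVE_TILEDATA | TEST_COMPARE_TILEDATA | TEST_SAVE_RESTORE):
print(handle_sync(TEST_SAVE_TILEDATA | TEST_COMPARE_TILEDATA | TEST_SAVE_RESTORE))
```

The flag scheme is why the host loop can drop the long `switch` over magic sync numbers: each `if (uc.args[1] & ...)` branch handles one orthogonal action.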
+12
tools/testing/vsock/util.c
···511511512512	printf("ok\n");513513	}514514+515515+	printf("All tests have been executed. Waiting for the other peer...");516516+	fflush(stdout);517517+518518+	/*519519+	 * Final full barrier, to ensure that all tests have been run and520520+	 * that even the last one has been successful on both sides.521521+	 */522522+	control_writeln("COMPLETED");523523+	control_expectln("COMPLETED");524524+525525+	printf("ok\n");514526}515527516528void list_tests(const struct test_case *test_cases)
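The `COMPLETED` write/expect pair added above acts as a full barrier: each peer announces that all of its tests passed, then blocks until the other peer announces the same, so neither side can exit while the last test is still running remotely. A minimal model with two threads and a pair of queues standing in for the vsock control channel (illustrative, not the real `control_*` helpers):

```python
import queue
import threading

def peer(name, tx: queue.Queue, rx: queue.Queue, log: list):
    # control_writeln("COMPLETED"): announce that all local tests passed ...
    tx.put("COMPLETED")
    # control_expectln("COMPLETED"): ... then block until the peer did too.
    msg = rx.get(timeout=5)
    assert msg == "COMPLETED"
    log.append(name)

a_to_b, b_to_a, log = queue.Queue(), queue.Queue(), []
t1 = threading.Thread(target=peer, args=("client", a_to_b, b_to_a, log))
t2 = threading.Thread(target=peer, args=("server", b_to_a, a_to_b, log))
t1.start(); t2.start(); t1.join(); t2.join()
print(sorted(log))
```

Writing before expecting is what makes this deadlock-free: both sides send first, so each `get` always finds a message.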