···
 =========================
 
 Writing 1 to this entry will disable unprivileged calls to ``bpf()``;
-once disabled, calling ``bpf()`` without ``CAP_SYS_ADMIN`` will return
-``-EPERM``.
+once disabled, calling ``bpf()`` without ``CAP_SYS_ADMIN`` or ``CAP_BPF``
+will return ``-EPERM``. Once set to 1, this can't be cleared from the
+running kernel anymore.
 
-Once set, this can't be cleared.
+Writing 2 to this entry will also disable unprivileged calls to ``bpf()``,
+however, an admin can still change this setting later on, if needed, by
+writing 0 or 1 to this entry.
 
+If ``BPF_UNPRIV_DEFAULT_OFF`` is enabled in the kernel config, then this
+entry will default to 2 instead of 0.
+
+= =============================================================
+0 Unprivileged calls to ``bpf()`` are enabled
+1 Unprivileged calls to ``bpf()`` are disabled without recovery
+2 Unprivileged calls to ``bpf()`` are disabled
+= =============================================================
 
 watchdog
 ========
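The tri-state semantics described in the hunk above can be sketched as a small user-space model. This is purely illustrative: the real control lives in ``/proc/sys/kernel/unprivileged_bpf_disabled`` and is enforced by the kernel, and the class and method names below are invented for the sketch.

```python
class UnprivBpfSysctl:
    """Toy model of the unprivileged_bpf_disabled semantics (not kernel code)."""

    def __init__(self, default_off=False):
        # With BPF_UNPRIV_DEFAULT_OFF the entry defaults to 2 instead of 0.
        self.value = 2 if default_off else 0

    def write(self, new):
        if new not in (0, 1, 2):
            raise ValueError("only 0, 1 and 2 are accepted")
        # Once set to 1, the setting can't be cleared from the running kernel.
        if self.value == 1 and new != 1:
            raise PermissionError("disabled without recovery")
        self.value = new

    def unprivileged_bpf_allowed(self):
        return self.value == 0


ctl = UnprivBpfSysctl()
ctl.write(2)   # recoverably disabled: an admin may later write 0 or 1
ctl.write(0)   # ...re-enabled
ctl.write(1)   # locked: further changes now fail until reboot
```

The point of the model is the asymmetry between 1 and 2: both disable unprivileged ``bpf()``, but only 2 can be undone later.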
···
     description: |
       I2C bus timeout in microseconds
 
+  fsl,i2c-erratum-a004447:
+    $ref: /schemas/types.yaml#/definitions/flag
+    description: |
+      Indicates the presence of QorIQ erratum A-004447, which
+      says that the standard i2c recovery scheme mechanism does
+      not work and an alternate implementation is needed.
+
 required:
   - compatible
   - reg
···
   - $ref: ethernet-controller.yaml#
 
 maintainers:
-  - Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
+  - Sergei Shtylyov <sergei.shtylyov@gmail.com>
 
 properties:
   compatible:
+2
Documentation/driver-api/usb/usb.rst
···
 In addition, some functions useful for creating debugging output are
 defined in ``drivers/usb/common/debug.c``.
 
+.. _usb_header:
+
 Host-Side Data Types and Macros
 ===============================
 
+8-8
Documentation/userspace-api/seccomp_filter.rst
···
 seccomp notification fd to receive a ``struct seccomp_notif``, which contains
 five members: the input length of the structure, a unique-per-filter ``id``,
 the ``pid`` of the task which triggered this request (which may be 0 if the
-task is in a pid ns not visible from the listener's pid namespace), a ``flags``
-member which for now only has ``SECCOMP_NOTIF_FLAG_SIGNALED``, representing
-whether or not the notification is a result of a non-fatal signal, and the
-``data`` passed to seccomp. Userspace can then make a decision based on this
-information about what to do, and ``ioctl(SECCOMP_IOCTL_NOTIF_SEND)`` a
-response, indicating what should be returned to userspace. The ``id`` member of
-``struct seccomp_notif_resp`` should be the same ``id`` as in ``struct
-seccomp_notif``.
+task is in a pid ns not visible from the listener's pid namespace). The
+notification also contains the ``data`` passed to seccomp, and a filters flag.
+The structure should be zeroed out prior to calling the ioctl.
+
+Userspace can then make a decision based on this information about what to do,
+and ``ioctl(SECCOMP_IOCTL_NOTIF_SEND)`` a response, indicating what should be
+returned to userspace. The ``id`` member of ``struct seccomp_notif_resp`` should
+be the same ``id`` as in ``struct seccomp_notif``.
 
 It is worth noting that ``struct seccomp_data`` contains the values of register
 arguments to the syscall, but does not contain pointers to memory. The task's
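The recv/send contract described in the new wording above — each notification carries a unique-per-filter ``id``, and the response is only accepted if it echoes that ``id`` — can be modeled as a minimal sketch. This is a pure-Python stand-in, not the real ``ioctl`` ABI; the class and dict keys are invented for illustration.

```python
import itertools


class ToyNotifyListener:
    """Illustrative stand-in for a seccomp notification fd.

    Only the id-matching contract is modeled; nothing here touches the
    real kernel interface.
    """

    def __init__(self):
        self._ids = itertools.count(1)
        self.pending = {}

    def recv(self, pid, data):
        # Models SECCOMP_IOCTL_NOTIF_RECV: a filtered syscall produces a
        # notification with a unique-per-filter id, the triggering pid,
        # a flags member, and the seccomp data.
        notif = {"id": next(self._ids), "pid": pid, "flags": 0, "data": data}
        self.pending[notif["id"]] = notif
        return notif

    def send(self, resp):
        # Models SECCOMP_IOCTL_NOTIF_SEND: the response is accepted only
        # if its id matches a still-pending notification's id.
        if resp["id"] not in self.pending:
            raise KeyError("no pending notification with this id")
        del self.pending[resp["id"]]
        return resp
```

Responding with a stale or mismatched ``id`` fails, which is the property the documentation is emphasizing.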
+5-3
Documentation/virt/kvm/vcpu-requests.rst
···
   necessary to inform each VCPU to completely refresh the tables. This
   request is used for that.
 
-KVM_REQ_PENDING_TIMER
+KVM_REQ_UNBLOCK
 
-  This request may be made from a timer handler run on the host on behalf
-  of a VCPU. It informs the VCPU thread to inject a timer interrupt.
+  This request informs the vCPU to exit kvm_vcpu_block. It is used for
+  example from timer handlers that run on the host on behalf of a vCPU,
+  or in order to update the interrupt routing and ensure that assigned
+  devices will wake up the vCPU.
 
 KVM_REQ_UNHALT
 
+19-8
MAINTAINERS
···
 F:	Documentation/devicetree/bindings/arm/cavium-thunder2.txt
 F:	arch/arm64/boot/dts/cavium/thunder2-99xx*
 
+CBS/ETF/TAPRIO QDISCS
+M:	Vinicius Costa Gomes <vinicius.gomes@intel.com>
+S:	Maintained
+L:	netdev@vger.kernel.org
+F:	net/sched/sch_cbs.c
+F:	net/sched/sch_etf.c
+F:	net/sched/sch_taprio.c
+
 CC2520 IEEE-802.15.4 RADIO DRIVER
 M:	Varka Bhadram <varkabhadram@gmail.com>
 L:	linux-wpan@vger.kernel.org
···
 
 DPAA2 ETHERNET DRIVER
 M:	Ioana Ciornei <ioana.ciornei@nxp.com>
-M:	Ioana Radulescu <ruxandra.radulescu@nxp.com>
 L:	netdev@vger.kernel.org
 S:	Maintained
 F:	Documentation/networking/device_drivers/ethernet/freescale/dpaa2/ethernet-driver.rst
···
 FANOTIFY
 M:	Jan Kara <jack@suse.cz>
 R:	Amir Goldstein <amir73il@gmail.com>
+R:	Matthew Bobrowski <repnop@google.com>
 L:	linux-fsdevel@vger.kernel.org
 S:	Maintained
 F:	fs/notify/fanotify/
···
 F:	include/linux/mfd/ntxec.h
 
 NETRONOME ETHERNET DRIVERS
-M:	Simon Horman <simon.horman@netronome.com>
+M:	Simon Horman <simon.horman@corigine.com>
 R:	Jakub Kicinski <kuba@kernel.org>
-L:	oss-drivers@netronome.com
+L:	oss-drivers@corigine.com
 S:	Maintained
 F:	drivers/net/ethernet/netronome/
 
···
 M:	Jakub Kicinski <kuba@kernel.org>
 L:	netdev@vger.kernel.org
 S:	Maintained
-W:	http://www.linuxfoundation.org/en/Net
 Q:	https://patchwork.kernel.org/project/netdevbpf/list/
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git
···
 M:	Jakub Kicinski <kuba@kernel.org>
 L:	netdev@vger.kernel.org
 S:	Maintained
-W:	http://www.linuxfoundation.org/en/Net
 Q:	https://patchwork.kernel.org/project/netdevbpf/list/
 B:	mailto:netdev@vger.kernel.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git
···
 F:	net/ipv4/nexthop.c
 
 NFC SUBSYSTEM
+M:	Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com>
+L:	linux-nfc@lists.01.org (moderated for non-subscribers)
 L:	netdev@vger.kernel.org
-S:	Orphan
+S:	Maintained
 F:	Documentation/devicetree/bindings/net/nfc/
 F:	drivers/nfc/
 F:	include/linux/platform_data/nfcmrvl.h
···
 F:	sound/soc/codecs/tfa9879*
 
 NXP-NCI NFC DRIVER
-M:	Clément Perrochaud <clement.perrochaud@effinnov.com>
 R:	Charles Gorand <charles.gorand@effinnov.com>
 L:	linux-nfc@lists.01.org (moderated for non-subscribers)
 S:	Supported
···
 M:	Dennis Zhou <dennis@kernel.org>
 M:	Tejun Heo <tj@kernel.org>
 M:	Christoph Lameter <cl@linux.com>
+L:	linux-mm@kvack.org
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/dennis/percpu.git
 F:	arch/*/include/asm/percpu.h
 F:	include/linux/percpu*.h
+F:	lib/percpu*.c
 F:	mm/percpu*.c
 
 PER-TASK DELAY ACCOUNTING
···
 M:	Julian Wiedmann <jwi@linux.ibm.com>
 M:	Karsten Graul <kgraul@linux.ibm.com>
 L:	linux-s390@vger.kernel.org
+L:	netdev@vger.kernel.org
 S:	Supported
 W:	http://www.ibm.com/developerworks/linux/linux390/
 F:	drivers/s390/net/*iucv*
···
 M:	Julian Wiedmann <jwi@linux.ibm.com>
 M:	Karsten Graul <kgraul@linux.ibm.com>
 L:	linux-s390@vger.kernel.org
+L:	netdev@vger.kernel.org
 S:	Supported
 W:	http://www.ibm.com/developerworks/linux/linux390/
 F:	drivers/s390/net/
···
 L:	linux-i2c@vger.kernel.org
 S:	Maintained
 F:	drivers/i2c/busses/i2c-designware-*
-F:	include/linux/platform_data/i2c-designware.h
 
 SYNOPSYS DESIGNWARE MMC/SD/SDIO DRIVER
 M:	Jaehoon Chung <jh80.chung@samsung.com>
···
 F:	drivers/xen/*swiotlb*
 
 XFS FILESYSTEM
+C:	irc://irc.oftc.net/xfs
 M:	Darrick J. Wong <djwong@kernel.org>
 M:	linux-xfs@vger.kernel.org
 L:	linux-xfs@vger.kernel.org
+6-1
Makefile
···
 VERSION = 5
 PATCHLEVEL = 13
 SUBLEVEL = 0
-EXTRAVERSION = -rc3
+EXTRAVERSION = -rc4
 NAME = Frozen Wasteland
 
 # *DOCUMENTATION*
···
 
 # Limit inlining across translation units to reduce binary size
 KBUILD_LDFLAGS += -mllvm -import-instr-limit=5
+
+# Check for frame size exceeding threshold during prolog/epilog insertion.
+ifneq ($(CONFIG_FRAME_WARN),0)
+KBUILD_LDFLAGS	+= -plugin-opt=-warn-stack-size=$(CONFIG_FRAME_WARN)
+endif
 endif
 
 ifdef CONFIG_LTO
···
  * This insanity brought to you by speculative system register reads,
  * out-of-order memory accesses, sequence locks and Thomas Gleixner.
  *
- * http://lists.infradead.org/pipermail/linux-arm-kernel/2019-February/631195.html
+ * https://lore.kernel.org/r/alpine.DEB.2.21.1902081950260.1662@nanos.tec.linutronix.de/
  */
 #define arch_counter_enforce_ordering(val) do {			\
 	u64 tmp, _val = (val);						\
···
 		return ret;
 	}
 
-	if (run->immediate_exit)
-		return -EINTR;
-
 	vcpu_load(vcpu);
+
+	if (run->immediate_exit) {
+		ret = -EINTR;
+		goto out;
+	}
 
 	kvm_sigset_activate(vcpu);
···
 	}
 
 	kvm_sigset_deactivate(vcpu);
+
+out:
+	/*
+	 * In the unlikely event that we are returning to userspace
+	 * with pending exceptions or PC adjustment, commit these
+	 * adjustments in order to give userspace a consistent view of
+	 * the vcpu state. Note that this relies on __kvm_adjust_pc()
+	 * being preempt-safe on VHE.
+	 */
+	if (unlikely(vcpu->arch.flags & (KVM_ARM64_PENDING_EXCEPTION |
+					 KVM_ARM64_INCREMENT_PC)))
+		kvm_call_hyp(__kvm_adjust_pc, vcpu);
 
 	vcpu_put(vcpu);
 	return ret;
+17-1
arch/arm64/kvm/hyp/exception.c
···
 	*vcpu_pc(vcpu) = vect_offset;
 }
 
-void kvm_inject_exception(struct kvm_vcpu *vcpu)
+static void kvm_inject_exception(struct kvm_vcpu *vcpu)
 {
 	if (vcpu_el1_is_32bit(vcpu)) {
 		switch (vcpu->arch.flags & KVM_ARM64_EXCEPT_MASK) {
···
 			 */
 			break;
 		}
+	}
+}
+
+/*
+ * Adjust the guest PC (and potentially exception state) depending on
+ * flags provided by the emulation code.
+ */
+void __kvm_adjust_pc(struct kvm_vcpu *vcpu)
+{
+	if (vcpu->arch.flags & KVM_ARM64_PENDING_EXCEPTION) {
+		kvm_inject_exception(vcpu);
+		vcpu->arch.flags &= ~(KVM_ARM64_PENDING_EXCEPTION |
+				      KVM_ARM64_EXCEPT_MASK);
+	} else if (vcpu->arch.flags & KVM_ARM64_INCREMENT_PC) {
+		kvm_skip_instr(vcpu);
+		vcpu->arch.flags &= ~KVM_ARM64_INCREMENT_PC;
 	}
 }
-18
arch/arm64/kvm/hyp/include/hyp/adjust_pc.h
···
 #include <asm/kvm_emulate.h>
 #include <asm/kvm_host.h>
 
-void kvm_inject_exception(struct kvm_vcpu *vcpu);
-
 static inline void kvm_skip_instr(struct kvm_vcpu *vcpu)
 {
 	if (vcpu_mode_is_32bit(vcpu)) {
···
 
 	write_sysreg_el2(vcpu_gp_regs(vcpu)->pstate, SYS_SPSR);
 	write_sysreg_el2(*vcpu_pc(vcpu), SYS_ELR);
-}
-
-/*
- * Adjust the guest PC on entry, depending on flags provided by EL1
- * for the purpose of emulation (MMIO, sysreg) or exception injection.
- */
-static inline void __adjust_pc(struct kvm_vcpu *vcpu)
-{
-	if (vcpu->arch.flags & KVM_ARM64_PENDING_EXCEPTION) {
-		kvm_inject_exception(vcpu);
-		vcpu->arch.flags &= ~(KVM_ARM64_PENDING_EXCEPTION |
-				      KVM_ARM64_EXCEPT_MASK);
-	} else if (vcpu->arch.flags & KVM_ARM64_INCREMENT_PC) {
-		kvm_skip_instr(vcpu);
-		vcpu->arch.flags &= ~KVM_ARM64_INCREMENT_PC;
-	}
 }
 
 /*
···
 extern unsigned long hyp_nr_cpus;
 struct host_kvm host_kvm;
 
-struct hyp_pool host_s2_mem;
-struct hyp_pool host_s2_dev;
+static struct hyp_pool host_s2_mem;
+static struct hyp_pool host_s2_dev;
 
 /*
  * Copies of the host's CPU features registers holding sanitized values.
+1-1
arch/arm64/kvm/hyp/nvhe/setup.c
···
 #include <nvhe/trap_handler.h>
 
 struct hyp_pool hpool;
-struct kvm_pgtable_mm_ops pkvm_pgtable_mm_ops;
 unsigned long hyp_nr_cpus;
 
 #define hyp_percpu_size ((unsigned long)__per_cpu_end - \
···
 static void *hyp_pgt_base;
 static void *host_s2_mem_pgt_base;
 static void *host_s2_dev_pgt_base;
+static struct kvm_pgtable_mm_ops pkvm_pgtable_mm_ops;
 
 static int divide_memory_pool(void *virt, unsigned long size)
 {
+1-2
arch/arm64/kvm/hyp/nvhe/switch.c
···
  * Author: Marc Zyngier <marc.zyngier@arm.com>
  */
 
-#include <hyp/adjust_pc.h>
 #include <hyp/switch.h>
 #include <hyp/sysreg-sr.h>
 
···
 	 */
 	__debug_save_host_buffers_nvhe(vcpu);
 
-	__adjust_pc(vcpu);
+	__kvm_adjust_pc(vcpu);
 
 	/*
 	 * We must restore the 32-bit state before the sysregs, thanks
···
 /* PPC-specific vcpu->requests bit members */
 #define KVM_REQ_WATCHDOG	KVM_ARCH_REQ(0)
 #define KVM_REQ_EPR_EXIT	KVM_ARCH_REQ(1)
+#define KVM_REQ_PENDING_TIMER	KVM_ARCH_REQ(2)
 
 #include <linux/mmu_notifier.h>
 
+1-1
arch/powerpc/kvm/book3s_hv.c
···
 			break;
 		}
 		cur = ktime_get();
-	} while (single_task_running() && ktime_before(cur, stop));
+	} while (kvm_vcpu_can_poll(cur, stop));
 
 	spin_lock(&vc->lock);
 	vc->vcore_state = VCORE_INACTIVE;
+1
arch/riscv/Kconfig
···
 	select ARCH_OPTIONAL_KERNEL_RWX if ARCH_HAS_STRICT_KERNEL_RWX
 	select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
 	select ARCH_SUPPORTS_HUGETLBFS if MMU
+	select ARCH_USE_MEMTEST
 	select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT if MMU
 	select ARCH_WANT_FRAME_POINTERS
 	select ARCH_WANT_HUGE_PMD_SHARE if 64BIT
+2-2
arch/riscv/include/asm/kexec.h
···
 	unsigned long fdt_addr;
 };
 
-const extern unsigned char riscv_kexec_relocate[];
-const extern unsigned int riscv_kexec_relocate_size;
+extern const unsigned char riscv_kexec_relocate[];
+extern const unsigned int riscv_kexec_relocate_size;
 
 typedef void (*riscv_kexec_method)(unsigned long first_ind_entry,
 				   unsigned long jump_addr,
+6-5
arch/riscv/kernel/machine_kexec.c
···
 #include <asm/set_memory.h>	/* For set_memory_x() */
 #include <linux/compiler.h>	/* For unreachable() */
 #include <linux/cpu.h>		/* For cpu_down() */
+#include <linux/reboot.h>
 
-/**
+/*
  * kexec_image_info - Print received image details
  */
 static void
···
 	}
 }
 
-/**
+/*
  * machine_kexec_prepare - Initialize kexec
  *
  * This function is called from do_kexec_load, when the user has
···
 }
 
 
-/**
+/*
  * machine_kexec_cleanup - Cleanup any leftovers from
  *			   machine_kexec_prepare
  *
···
 #endif
 }
 
-/**
+/*
  * machine_crash_shutdown - Prepare to kexec after a kernel crash
  *
  * This function is called by crash_kexec just before machine_kexec
···
 	pr_info("Starting crashdump kernel...\n");
 }
 
-/**
+/*
  * machine_kexec - Jump to the loaded kimage
  *
  * This function is called by kernel_kexec which is called by the
···
 	return rc;
 }
 
-int x86_decode_insn(struct x86_emulate_ctxt *ctxt, void *insn, int insn_len)
+int x86_decode_insn(struct x86_emulate_ctxt *ctxt, void *insn, int insn_len, int emulation_type)
 {
 	int rc = X86EMUL_CONTINUE;
 	int mode = ctxt->mode;
···
 
 	ctxt->execute = opcode.u.execute;
 
-	if (unlikely(ctxt->ud) && likely(!(ctxt->d & EmulateOnUD)))
+	if (unlikely(emulation_type & EMULTYPE_TRAP_UD) &&
+	    likely(!(ctxt->d & EmulateOnUD)))
 		return EMULATION_FAILED;
 
 	if (unlikely(ctxt->d &
+8
arch/x86/kvm/hyperv.c
···
 {
 	struct kvm_hv *hv = to_kvm_hv(kvm);
 	u64 gfn;
+	int idx;
 
 	if (hv->hv_tsc_page_status == HV_TSC_PAGE_BROKEN ||
 	    hv->hv_tsc_page_status == HV_TSC_PAGE_UNSET ||
···
 	gfn = hv->hv_tsc_page >> HV_X64_MSR_TSC_REFERENCE_ADDRESS_SHIFT;
 
 	hv->tsc_ref.tsc_sequence = 0;
+
+	/*
+	 * Take the srcu lock as memslots will be accessed to check the gfn
+	 * cache generation against the memslots generation.
+	 */
+	idx = srcu_read_lock(&kvm->srcu);
 	if (kvm_write_guest(kvm, gfn_to_gpa(gfn),
 			    &hv->tsc_ref, sizeof(hv->tsc_ref.tsc_sequence)))
 		hv->hv_tsc_page_status = HV_TSC_PAGE_BROKEN;
+	srcu_read_unlock(&kvm->srcu, idx);
 
 out_unlock:
 	mutex_unlock(&hv->hv_lock);
+1-2
arch/x86/kvm/kvm_emulate.h
···
 	int interruptibility;
 
 	bool perm_ok; /* do not check permissions if true */
-	bool ud;	/* inject an #UD if host doesn't support insn */
 	bool tf;	/* TF value before instruction (after for syscall/sysret) */
 
 	bool have_exception;
···
 #define X86EMUL_MODE_HOST	X86EMUL_MODE_PROT64
 #endif
 
-int x86_decode_insn(struct x86_emulate_ctxt *ctxt, void *insn, int insn_len);
+int x86_decode_insn(struct x86_emulate_ctxt *ctxt, void *insn, int insn_len, int emulation_type);
 bool x86_page_table_writing_insn(struct x86_emulate_ctxt *ctxt);
 #define EMULATION_FAILED -1
 #define EMULATION_OK 0
+12-4
arch/x86/kvm/lapic.c
···
 	guest_tsc = kvm_read_l1_tsc(vcpu, rdtsc());
 	apic->lapic_timer.advance_expire_delta = guest_tsc - tsc_deadline;
 
+	if (lapic_timer_advance_dynamic) {
+		adjust_lapic_timer_advance(vcpu, apic->lapic_timer.advance_expire_delta);
+		/*
+		 * If the timer fired early, reread the TSC to account for the
+		 * overhead of the above adjustment to avoid waiting longer
+		 * than is necessary.
+		 */
+		if (guest_tsc < tsc_deadline)
+			guest_tsc = kvm_read_l1_tsc(vcpu, rdtsc());
+	}
+
 	if (guest_tsc < tsc_deadline)
 		__wait_lapic_expire(vcpu, tsc_deadline - guest_tsc);
-
-	if (lapic_timer_advance_dynamic)
-		adjust_lapic_timer_advance(vcpu, apic->lapic_timer.advance_expire_delta);
 }
 
 void kvm_wait_lapic_expire(struct kvm_vcpu *vcpu)
···
 	}
 
 	atomic_inc(&apic->lapic_timer.pending);
-	kvm_make_request(KVM_REQ_PENDING_TIMER, vcpu);
+	kvm_make_request(KVM_REQ_UNBLOCK, vcpu);
 	if (from_timer_fn)
 		kvm_vcpu_kick(vcpu);
 }
+3-3
arch/x86/kvm/mmu/tdp_mmu.c
···
 }
 
 /*
- * Remove write access from all the SPTEs mapping GFNs [start, end). If
- * skip_4k is set, SPTEs that map 4k pages, will not be write-protected.
- * Returns true if an SPTE has been changed and the TLBs need to be flushed.
+ * Remove write access from all SPTEs at or above min_level that map GFNs
+ * [start, end). Returns true if an SPTE has been changed and the TLBs need to
+ * be flushed.
  */
 static bool wrprot_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
 			     gfn_t start, gfn_t end, int min_level)
···
 
 
 /*
+ * Bail out of the block loop if the VM has an assigned
+ * device, but the blocking vCPU didn't reconfigure the
+ * PI.NV to the wakeup vector, i.e. the assigned device
+ * came along after the initial check in pi_pre_block().
+ */
+void vmx_pi_start_assignment(struct kvm *kvm)
+{
+	if (!irq_remapping_cap(IRQ_POSTING_CAP))
+		return;
+
+	kvm_make_all_cpus_request(kvm, KVM_REQ_UNBLOCK);
+}
+
+/*
  * pi_update_irte - set IRTE for Posted-Interrupts
  *
  * @kvm: kvm
···
  */
 static void acpi_pm_end(void)
 {
-	acpi_turn_off_unused_power_resources();
+	acpi_turn_off_unused_power_resources(false);
 	acpi_scan_lock_release();
 	/*
 	 * This is necessary in case acpi_pm_finish() is not called during a
+36-32
drivers/base/core.c
···
 {
 	return srcu_read_lock_held(&device_links_srcu);
 }
+
+static void device_link_synchronize_removal(void)
+{
+	synchronize_srcu(&device_links_srcu);
+}
+
+static void device_link_remove_from_lists(struct device_link *link)
+{
+	list_del_rcu(&link->s_node);
+	list_del_rcu(&link->c_node);
+}
 #else /* !CONFIG_SRCU */
 static DECLARE_RWSEM(device_links_lock);
 
···
 	return lockdep_is_held(&device_links_lock);
 }
 #endif
+
+static inline void device_link_synchronize_removal(void)
+{
+}
+
+static void device_link_remove_from_lists(struct device_link *link)
+{
+	list_del(&link->s_node);
+	list_del(&link->c_node);
+}
 #endif /* !CONFIG_SRCU */
 
 static bool device_is_ancestor(struct device *dev, struct device *target)
···
 };
 ATTRIBUTE_GROUPS(devlink);
 
-static void device_link_free(struct device_link *link)
+static void device_link_release_fn(struct work_struct *work)
 {
+	struct device_link *link = container_of(work, struct device_link, rm_work);
+
+	/* Ensure that all references to the link object have been dropped. */
+	device_link_synchronize_removal();
+
 	while (refcount_dec_not_one(&link->rpm_active))
 		pm_runtime_put(link->supplier);
 
···
 	kfree(link);
 }
 
-#ifdef CONFIG_SRCU
-static void __device_link_free_srcu(struct rcu_head *rhead)
-{
-	device_link_free(container_of(rhead, struct device_link, rcu_head));
-}
-
 static void devlink_dev_release(struct device *dev)
 {
 	struct device_link *link = to_devlink(dev);
 
-	call_srcu(&device_links_srcu, &link->rcu_head, __device_link_free_srcu);
+	INIT_WORK(&link->rm_work, device_link_release_fn);
+	/*
+	 * It may take a while to complete this work because of the SRCU
+	 * synchronization in device_link_release_fn() and if the consumer or
+	 * supplier devices get deleted when it runs, so put it into the "long"
+	 * workqueue.
+	 */
+	queue_work(system_long_wq, &link->rm_work);
 }
-#else
-static void devlink_dev_release(struct device *dev)
-{
-	device_link_free(to_devlink(dev));
-}
-#endif
 
 static struct class devlink_class = {
 	.name = "devlink",
···
 }
 EXPORT_SYMBOL_GPL(device_link_add);
 
-#ifdef CONFIG_SRCU
 static void __device_link_del(struct kref *kref)
 {
 	struct device_link *link = container_of(kref, struct device_link, kref);
···
 
 	pm_runtime_drop_link(link);
 
-	list_del_rcu(&link->s_node);
-	list_del_rcu(&link->c_node);
+	device_link_remove_from_lists(link);
 	device_unregister(&link->link_dev);
 }
-#else /* !CONFIG_SRCU */
-static void __device_link_del(struct kref *kref)
-{
-	struct device_link *link = container_of(kref, struct device_link, kref);
-
-	dev_info(link->consumer, "Dropping the link to %s\n",
-		 dev_name(link->supplier));
-
-	pm_runtime_drop_link(link);
-
-	list_del(&link->s_node);
-	list_del(&link->c_node);
-	device_unregister(&link->link_dev);
-}
-#endif /* !CONFIG_SRCU */
 
 static void device_link_put_kref(struct device_link *link)
 {
···
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 	int i;
 
+	cancel_delayed_work_sync(&adev->vcn.idle_work);
+
 	for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
 		if (adev->vcn.harvest_config & (1 << i))
 			continue;
+2-3
drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
···
 static int vcn_v3_0_hw_fini(void *handle)
 {
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
-	struct amdgpu_ring *ring;
 	int i;
+
+	cancel_delayed_work_sync(&adev->vcn.idle_work);
 
 	for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
 		if (adev->vcn.harvest_config & (1 << i))
 			continue;
-
-		ring = &adev->vcn.inst[i].ring_dec;
 
 		if (!amdgpu_sriov_vf(adev)) {
 			if ((adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG) ||
+9
drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
···
 
 static int navi10_enable_mgpu_fan_boost(struct smu_context *smu)
 {
+	struct smu_table_context *table_context = &smu->smu_table;
+	PPTable_t *smc_pptable = table_context->driver_pptable;
 	struct amdgpu_device *adev = smu->adev;
 	uint32_t param = 0;
 
 	/* Navi12 does not support this */
 	if (adev->asic_type == CHIP_NAVI12)
+		return 0;
+
+	/*
+	 * Skip the MGpuFanBoost setting for those ASICs
+	 * which do not support it
+	 */
+	if (!smc_pptable->MGpuFanBoostLimitRpm)
 		return 0;
 
 	/* Workaround for WS SKU */
···
 
 static int sienna_cichlid_enable_mgpu_fan_boost(struct smu_context *smu)
 {
+	struct smu_table_context *table_context = &smu->smu_table;
+	PPTable_t *smc_pptable = table_context->driver_pptable;
+
+	/*
+	 * Skip the MGpuFanBoost setting for those ASICs
+	 * which do not support it
+	 */
+	if (!smc_pptable->MGpuFanBoostLimitRpm)
+		return 0;
+
 	return smu_cmn_send_smc_msg_with_param(smu,
 					       SMU_MSG_SetMGpuFanBoostLimitRpm,
 					       0,
···
 	return drm_dp_dpcd_write(&intel_dp->aux, DP_PHY_REPEATER_MODE, &val, 1) == 1;
 }
 
-/**
- * intel_dp_init_lttpr_and_dprx_caps - detect LTTPR and DPRX caps, init the LTTPR link training mode
- * @intel_dp: Intel DP struct
- *
- * Read the LTTPR common and DPRX capabilities and switch to non-transparent
- * link training mode if any is detected and read the PHY capabilities for all
- * detected LTTPRs. In case of an LTTPR detection error or if the number of
- * LTTPRs is more than is supported (8), fall back to the no-LTTPR,
- * transparent mode link training mode.
- *
- * Returns:
- *   >0 if LTTPRs were detected and the non-transparent LT mode was set. The
- *      DPRX capabilities are read out.
- *    0 if no LTTPRs or more than 8 LTTPRs were detected or in case of a
- *      detection failure and the transparent LT mode was set. The DPRX
- *      capabilities are read out.
- *   <0 Reading out the DPRX capabilities failed.
- */
-int intel_dp_init_lttpr_and_dprx_caps(struct intel_dp *intel_dp)
+static int intel_dp_init_lttpr(struct intel_dp *intel_dp)
 {
 	int lttpr_count;
-	bool ret;
 	int i;
 
-	ret = intel_dp_read_lttpr_common_caps(intel_dp);
-
-	/* The DPTX shall read the DPRX caps after LTTPR detection. */
-	if (drm_dp_read_dpcd_caps(&intel_dp->aux, intel_dp->dpcd)) {
-		intel_dp_reset_lttpr_common_caps(intel_dp);
-		return -EIO;
-	}
-
-	if (!ret)
+	if (!intel_dp_read_lttpr_common_caps(intel_dp))
 		return 0;
-
-	/*
-	 * The 0xF0000-0xF02FF range is only valid if the DPCD revision is
-	 * at least 1.4.
-	 */
-	if (intel_dp->dpcd[DP_DPCD_REV] < 0x14) {
-		intel_dp_reset_lttpr_common_caps(intel_dp);
-		return 0;
-	}
 
 	lttpr_count = drm_dp_lttpr_count(intel_dp->lttpr_common_caps);
 	/*
···
 
 	for (i = 0; i < lttpr_count; i++)
 		intel_dp_read_lttpr_phy_caps(intel_dp, DP_PHY_LTTPR(i));
+
+	return lttpr_count;
+}
+
+/**
+ * intel_dp_init_lttpr_and_dprx_caps - detect LTTPR and DPRX caps, init the LTTPR link training mode
+ * @intel_dp: Intel DP struct
+ *
+ * Read the LTTPR common and DPRX capabilities and switch to non-transparent
+ * link training mode if any is detected and read the PHY capabilities for all
+ * detected LTTPRs. In case of an LTTPR detection error or if the number of
+ * LTTPRs is more than is supported (8), fall back to the no-LTTPR,
+ * transparent mode link training mode.
+ *
+ * Returns:
+ *   >0 if LTTPRs were detected and the non-transparent LT mode was set. The
+ *      DPRX capabilities are read out.
+ *    0 if no LTTPRs or more than 8 LTTPRs were detected or in case of a
+ *      detection failure and the transparent LT mode was set. The DPRX
+ *      capabilities are read out.
+ *   <0 Reading out the DPRX capabilities failed.
+ */
+int intel_dp_init_lttpr_and_dprx_caps(struct intel_dp *intel_dp)
+{
+	int lttpr_count = intel_dp_init_lttpr(intel_dp);
+
+	/* The DPTX shall read the DPRX caps after LTTPR detection. */
+	if (drm_dp_read_dpcd_caps(&intel_dp->aux, intel_dp->dpcd)) {
+		intel_dp_reset_lttpr_common_caps(intel_dp);
+		return -EIO;
+	}
 
 	return lttpr_count;
 }
···
 
 config I2C_HISI
 	tristate "HiSilicon I2C controller"
-	depends on ARM64 || COMPILE_TEST
+	depends on (ARM64 && ACPI) || COMPILE_TEST
 	help
 	  Say Y here if you want to have Hisilicon I2C controller support
 	  available on the Kunpeng Server.
+1-1
drivers/i2c/busses/i2c-ali1563.c
···
 // SPDX-License-Identifier: GPL-2.0-only
-/**
+/*
  *	i2c-ali1563.c - i2c driver for the ALi 1563 Southbridge
  *
  *	Copyright (C) 2004 Patrick Mochel
+1-1
drivers/i2c/busses/i2c-cadence.c
···
 };
 
 /**
- * enum cdns_i2c_slave_mode - Slave state when I2C is operating in slave mode
+ * enum cdns_i2c_slave_state - Slave state when I2C is operating in slave mode
  *
  * @CDNS_I2C_SLAVE_STATE_IDLE: I2C slave idle
  * @CDNS_I2C_SLAVE_STATE_SEND: I2C slave sending data to master
+1-1
drivers/i2c/busses/i2c-designware-master.c
···
 }
 
 /**
- * i2c_dw_init() - Initialize the designware I2C master hardware
+ * i2c_dw_init_master() - Initialize the designware I2C master hardware
  * @dev: device private data
  *
  * This functions configures and enables the I2C master.
+2-1
drivers/i2c/busses/i2c-eg20t.c
···
 
 /**
  * struct adapter_info - This structure holds the adapter information for the
-			 PCH i2c controller
+ *			 PCH i2c controller
  * @pch_data:		stores a list of i2c_algo_pch_data
  * @pch_i2c_suspended:	specifies whether the system is suspended or not
  *			perhaps with more lines and words.
···
 /**
  * pch_i2c_writebytes() - write data to I2C bus in normal mode
  * @i2c_adap:	Pointer to the struct i2c_adapter.
+ * @msgs:	Pointer to the i2c message structure.
  * @last:	specifies whether last message or not.
  *		In the case of compound mode it will be 1 for last message,
  *		otherwise 0.
+2-4
drivers/i2c/busses/i2c-i801.c
···
 		dev_err(&priv->pci_dev->dev, "Transaction timeout\n");
 		/* try to stop the current command */
 		dev_dbg(&priv->pci_dev->dev, "Terminating the current operation\n");
-		outb_p(inb_p(SMBHSTCNT(priv)) | SMBHSTCNT_KILL,
-		       SMBHSTCNT(priv));
+		outb_p(SMBHSTCNT_KILL, SMBHSTCNT(priv));
 		usleep_range(1000, 2000);
-		outb_p(inb_p(SMBHSTCNT(priv)) & (~SMBHSTCNT_KILL),
-		       SMBHSTCNT(priv));
+		outb_p(0, SMBHSTCNT(priv));
 
 		/* Check if it worked */
 		status = inb_p(SMBHSTSTS(priv));
···
  * @clk_freq: clock frequency for the operation mode
  * @tft: Tx FIFO Threshold in bytes
  * @rft: Rx FIFO Threshold in bytes
- * @timeout Slave response timeout (ms)
+ * @timeout: Slave response timeout (ms)
  * @sm: speed mode
  * @stop: stop condition.
  * @xfer_complete: acknowledge completion for a I2C message.
+4-4
drivers/i2c/busses/i2c-ocores.c
···
 }
 
 /**
- * Process timeout event
+ * ocores_process_timeout() - Process timeout event
  * @i2c: ocores I2C device instance
  */
 static void ocores_process_timeout(struct ocores_i2c *i2c)
···
 }
 
 /**
- * Wait until something change in a given register
+ * ocores_wait() - Wait until something change in a given register
  * @i2c: ocores I2C device instance
  * @reg: register to query
  * @mask: bitmask to apply on register value
···
 }
 
 /**
- * Wait until is possible to process some data
+ * ocores_poll_wait() - Wait until is possible to process some data
  * @i2c: ocores I2C device instance
  *
  * Used when the device is in polling mode (interrupts disabled).
···
 }
 
 /**
- * It handles an IRQ-less transfer
+ * ocores_process_polling() - It handles an IRQ-less transfer
  * @i2c: ocores I2C device instance
  *
  * Even if IRQ are disabled, the I2C OpenCore IP behavior is exactly the same
+4-4
drivers/i2c/busses/i2c-pnx.c
···
 /**
  * i2c_pnx_start - start a device
  * @slave_addr: slave address
- * @adap: pointer to adapter structure
+ * @alg_data: pointer to local driver data structure
  *
  * Generate a START signal in the desired mode.
  */
···
 
 /**
  * i2c_pnx_stop - stop a device
- * @adap: pointer to I2C adapter structure
+ * @alg_data: pointer to local driver data structure
  *
  * Generate a STOP signal to terminate the master transaction.
  */
···
 
 /**
  * i2c_pnx_master_xmit - transmit data to slave
- * @adap: pointer to I2C adapter structure
+ * @alg_data: pointer to local driver data structure
  *
  * Sends one byte of data to the slave
  */
···
 
 /**
  * i2c_pnx_master_rcv - receive data from slave
- * @adap: pointer to I2C adapter structure
+ * @alg_data: pointer to local driver data structure
  *
  * Reads one byte data from the slave
  */
···
 				 * forces us to send a new START
 				 * when we change direction
 				 */
+				dev_dbg(i2c->dev,
+					"missing START before write->read\n");
 				s3c24xx_i2c_stop(i2c, -EINVAL);
+				break;
 			}
 
 			goto retry_write;
···
 }
 
 /**
- * st_i2c_handle_write() - Handle FIFO enmpty interrupt in case of read
+ * st_i2c_handle_read() - Handle FIFO empty interrupt in case of read
  * @i2c_dev: Controller's private data
  */
 static void st_i2c_handle_read(struct st_i2c_dev *i2c_dev)
···
 }
 
 /**
- * st_i2c_isr() - Interrupt routine
+ * st_i2c_isr_thread() - Interrupt routine
  * @irq: interrupt number
  * @data: Controller's private data
  */
+1-1
drivers/i2c/busses/i2c-stm32f4.c
···
 }
 
 /**
- * stm32f4_i2c_write_ byte() - Write a byte in the data register
+ * stm32f4_i2c_write_byte() - Write a byte in the data register
  * @i2c_dev: Controller's private data
  * @byte: Data to write in the register
  */
+2-2
drivers/i2c/muxes/i2c-arb-gpio-challenge.c
···
 };
 
 
-/**
+/*
  * i2c_arbitrator_select - claim the I2C bus
  *
  * Use the GPIO-based signalling protocol; return -EBUSY if we fail.
···
 	return -EBUSY;
 }
 
-/**
+/*
  * i2c_arbitrator_deselect - release the I2C bus
  *
  * Release the I2C bus using the GPIO-based signalling protocol.
+20-16
drivers/iio/adc/ad7124.c
···
 		if (ret)
 			goto err;
 
+		if (channel >= indio_dev->num_channels) {
+			dev_err(indio_dev->dev.parent,
+				"Channel index >= number of channels\n");
+			ret = -EINVAL;
+			goto err;
+		}
+
 		ret = of_property_read_u32_array(child, "diff-channels",
 						 ain, 2);
 		if (ret)
···
 	return ret;
 }
 
+static void ad7124_reg_disable(void *r)
+{
+	regulator_disable(r);
+}
+
 static int ad7124_probe(struct spi_device *spi)
 {
 	const struct ad7124_chip_info *info;
···
 		ret = regulator_enable(st->vref[i]);
 		if (ret)
 			return ret;
+
+		ret = devm_add_action_or_reset(&spi->dev, ad7124_reg_disable,
+					       st->vref[i]);
+		if (ret)
+			return ret;
 	}
 
 	st->mclk = devm_clk_get(&spi->dev, "mclk");
-	if (IS_ERR(st->mclk)) {
-		ret = PTR_ERR(st->mclk);
-		goto error_regulator_disable;
-	}
+	if (IS_ERR(st->mclk))
+		return PTR_ERR(st->mclk);
 
 	ret = clk_prepare_enable(st->mclk);
 	if (ret < 0)
-		goto error_regulator_disable;
+		return ret;
 
 	ret = ad7124_soft_reset(st);
 	if (ret < 0)
···
 	ad_sd_cleanup_buffer_and_trigger(indio_dev);
 error_clk_disable_unprepare:
 	clk_disable_unprepare(st->mclk);
-error_regulator_disable:
-	for (i = ARRAY_SIZE(st->vref) - 1; i >= 0; i--) {
-		if (!IS_ERR_OR_NULL(st->vref[i]))
-			regulator_disable(st->vref[i]);
-	}
 
 	return ret;
 }
···
 {
 	struct iio_dev *indio_dev = spi_get_drvdata(spi);
 	struct ad7124_state *st = iio_priv(indio_dev);
-	int i;
 
 	iio_device_unregister(indio_dev);
 	ad_sd_cleanup_buffer_and_trigger(indio_dev);
 	clk_disable_unprepare(st->mclk);
-
-	for (i = ARRAY_SIZE(st->vref) - 1; i >= 0; i--) {
-		if (!IS_ERR_OR_NULL(st->vref[i]))
-			regulator_disable(st->vref[i]);
-	}
 
 	return 0;
 }
+10-9
drivers/iio/adc/ad7192.c
···
 {
 	struct ad7192_state *st;
 	struct iio_dev *indio_dev;
-	int ret, voltage_uv = 0;
+	int ret;
 
 	if (!spi->irq) {
 		dev_err(&spi->dev, "no IRQ?\n");
···
 		goto error_disable_avdd;
 	}
 
-	voltage_uv = regulator_get_voltage(st->avdd);
-
-	if (voltage_uv > 0) {
-		st->int_vref_mv = voltage_uv / 1000;
-	} else {
-		ret = voltage_uv;
+	ret = regulator_get_voltage(st->avdd);
+	if (ret < 0) {
 		dev_err(&spi->dev, "Device tree error, reference voltage undefined\n");
 		goto error_disable_avdd;
 	}
+	st->int_vref_mv = ret / 1000;
 
 	spi_set_drvdata(spi, indio_dev);
 	st->chip_info = of_device_get_match_data(&spi->dev);
···
 	return 0;
 
 error_disable_clk:
-	clk_disable_unprepare(st->mclk);
+	if (st->clock_sel == AD7192_CLK_EXT_MCLK1_2 ||
+	    st->clock_sel == AD7192_CLK_EXT_MCLK2)
+		clk_disable_unprepare(st->mclk);
 error_remove_trigger:
 	ad_sd_cleanup_buffer_and_trigger(indio_dev);
 error_disable_dvdd:
···
 	struct ad7192_state *st = iio_priv(indio_dev);
 
 	iio_device_unregister(indio_dev);
-	clk_disable_unprepare(st->mclk);
+	if (st->clock_sel == AD7192_CLK_EXT_MCLK1_2 ||
+	    st->clock_sel == AD7192_CLK_EXT_MCLK2)
+		clk_disable_unprepare(st->mclk);
 	ad_sd_cleanup_buffer_and_trigger(indio_dev);
 
 	regulator_disable(st->dvdd);
+6-2
drivers/iio/adc/ad7768-1.c
···
 	 * transfer buffers to live in their own cache lines.
 	 */
 	union {
+		struct {
+			__be32 chan;
+			s64 timestamp;
+		} scan;
 		__be32 d32;
 		u8 d8[2];
 	} data ____cacheline_aligned;
···
 
 	mutex_lock(&st->lock);
 
-	ret = spi_read(st->spi, &st->data.d32, 3);
+	ret = spi_read(st->spi, &st->data.scan.chan, 3);
 	if (ret < 0)
 		goto err_unlock;
 
-	iio_push_to_buffers_with_timestamp(indio_dev, &st->data.d32,
+	iio_push_to_buffers_with_timestamp(indio_dev, &st->data.scan,
 					   iio_get_time_ns(indio_dev));
 
 	iio_trigger_notify_done(indio_dev->trig);
+1
drivers/iio/adc/ad7793.c
···
 	id &= AD7793_ID_MASK;
 
 	if (id != st->chip_info->id) {
+		ret = -ENODEV;
 		dev_err(&st->sd.spi->dev, "device ID query failed\n");
 		goto out;
 	}
+3-1
drivers/iio/adc/ad7923.c
···
 	/*
 	 * DMA (thus cache coherency maintenance) requires the
 	 * transfer buffers to live in their own cache lines.
+	 * Ensure rx_buf can be directly used in iio_push_to_buffers_with_timetamp
+	 * Length = 8 channels + 4 extra for 8 byte timestamp
 	 */
-	__be16 rx_buf[4] ____cacheline_aligned;
+	__be16 rx_buf[12] ____cacheline_aligned;
 	__be16 tx_buf[4];
 };
+11-5
drivers/iio/dac/ad5770r.c
···
 	device_for_each_child_node(&st->spi->dev, child) {
 		ret = fwnode_property_read_u32(child, "num", &num);
 		if (ret)
-			return ret;
-		if (num >= AD5770R_MAX_CHANNELS)
-			return -EINVAL;
+			goto err_child_out;
+		if (num >= AD5770R_MAX_CHANNELS) {
+			ret = -EINVAL;
+			goto err_child_out;
+		}
 
 		ret = fwnode_property_read_u32_array(child,
 						     "adi,range-microamp",
 						     tmp, 2);
 		if (ret)
-			return ret;
+			goto err_child_out;
 
 		min = tmp[0] / 1000;
 		max = tmp[1] / 1000;
 		ret = ad5770r_store_output_range(st, min, max, num);
 		if (ret)
-			return ret;
+			goto err_child_out;
 	}
 
+	return 0;
+
+err_child_out:
+	fwnode_handle_put(child);
 	return ret;
 }
+2
drivers/iio/gyro/fxas21002c_core.c
···
 	ret = regmap_field_read(data->regmap_fields[F_TEMP], &temp);
 	if (ret < 0) {
 		dev_err(dev, "failed to read temp: %d\n", ret);
+		fxas21002c_pm_put(data);
 		goto data_unlock;
 	}
···
 			       &axis_be, sizeof(axis_be));
 	if (ret < 0) {
 		dev_err(dev, "failed to read axis: %d: %d\n", index, ret);
+		fxas21002c_pm_put(data);
 		goto data_unlock;
 	}
···
 				 struct device *dev,
 				 u32 pasid)
 {
-	int flags = PASID_FLAG_SUPERVISOR_MODE;
 	struct dma_pte *pgd = domain->pgd;
 	int agaw, level;
+	int flags = 0;
 
 	/*
 	 * Skip top levels of page tables for iommu which has
···
 	if (level != 4 && level != 5)
 		return -EINVAL;
 
-	flags |= (level == 5) ? PASID_FLAG_FL5LP : 0;
+	if (pasid != PASID_RID2PASID)
+		flags |= PASID_FLAG_SUPERVISOR_MODE;
+	if (level == 5)
+		flags |= PASID_FLAG_FL5LP;
 
 	if (domain->domain.type == IOMMU_DOMAIN_UNMANAGED)
 		flags |= PASID_FLAG_PAGE_SNOOP;
···
 
 	if (!sinfo) {
 		sinfo = kzalloc(sizeof(*sinfo), GFP_ATOMIC);
+		if (!sinfo)
+			return -ENOMEM;
 		sinfo->domain = domain;
 		sinfo->pdev = dev;
 		list_add(&sinfo->link_phys, &info->subdevices);
+2-1
drivers/iommu/intel/pasid.c
···
 	 * Since it is a second level only translation setup, we should
 	 * set SRE bit as well (addresses are expected to be GPAs).
 	 */
-	pasid_set_sre(pte);
+	if (pasid != PASID_RID2PASID)
+		pasid_set_sre(pte);
 	pasid_set_present(pte);
 	pasid_flush_caches(iommu, pte, pasid, did);
 
···
 #include <linux/platform_device.h>
 #include <linux/delay.h>
 #include <linux/mtd/mtd.h>
+#include <linux/mtd/nand-ecc-sw-hamming.h>
 #include <linux/mtd/rawnand.h>
 #include <linux/mtd/partitions.h>
 #include <linux/io.h>
···
 	int stat;
 
 	for (eccsize = chip->ecc.size; eccsize > 0; eccsize -= 256) {
-		stat = rawnand_sw_hamming_correct(chip, buf, read_ecc,
-						  calc_ecc);
+		stat = ecc_sw_hamming_correct(buf, read_ecc, calc_ecc,
+					      chip->ecc.size, false);
 		if (stat < 0)
 			return stat;
 		corrected += stat;
+14-12
drivers/mtd/parsers/ofpart_core.c
···
 	if (!mtd_node)
 		return 0;
 
-	ofpart_node = of_get_child_by_name(mtd_node, "partitions");
-	if (!ofpart_node && !master->parent) {
-		/*
-		 * We might get here even when ofpart isn't used at all (e.g.,
-		 * when using another parser), so don't be louder than
-		 * KERN_DEBUG
-		 */
-		pr_debug("%s: 'partitions' subnode not found on %pOF. Trying to parse direct subnodes as partitions.\n",
-			 master->name, mtd_node);
+	if (!master->parent) { /* Master */
+		ofpart_node = of_get_child_by_name(mtd_node, "partitions");
+		if (!ofpart_node) {
+			/*
+			 * We might get here even when ofpart isn't used at all (e.g.,
+			 * when using another parser), so don't be louder than
+			 * KERN_DEBUG
+			 */
+			pr_debug("%s: 'partitions' subnode not found on %pOF. Trying to parse direct subnodes as partitions.\n",
+				 master->name, mtd_node);
+			ofpart_node = mtd_node;
+			dedicated = false;
+		}
+	} else { /* Partition */
 		ofpart_node = mtd_node;
-		dedicated = false;
 	}
-	if (!ofpart_node)
-		return 0;
 
 	of_id = of_match_node(parse_ofpart_match_table, ofpart_node);
 	if (dedicated && !of_id) {
+2-2
drivers/net/appletalk/cops.c
···
 		break;
 	}
 
+	dev->base_addr = ioaddr;
+
 	/* Reserve any actual interrupt. */
 	if (dev->irq) {
 		retval = request_irq(dev->irq, cops_interrupt, 0, dev->name, dev);
 		if (retval)
 			goto err_out;
 	}
-
-	dev->base_addr = ioaddr;
 
 	lp = netdev_priv(dev);
 	spin_lock_init(&lp->lock);
···
 		.num_statics = 16,
 		.cpu_ports = 0x7F,	/* can be configured as cpu port */
 		.port_cnt = 7,		/* total physical port count */
+		.phy_errata_9477 = true,
 	},
 };
-8
drivers/net/dsa/mt7530.c
···
 {
 	struct mt7530_priv *priv = ds->priv;
 
-	/* The real fabric path would be decided on the membership in the
-	 * entry of VLAN table. PCR_MATRIX set up here with ALL_MEMBERS
-	 * means potential VLAN can be consisting of certain subset of all
-	 * ports.
-	 */
-	mt7530_rmw(priv, MT7530_PCR_P(port),
-		   PCR_MATRIX_MASK, PCR_MATRIX(MT7530_ALL_MEMBERS));
-
 	/* Trapped into security mode allows packet forwarding through VLAN
 	 * table lookup. CPU port is set to fallback mode to let untagged
 	 * frames pass through.
+10-5
drivers/net/dsa/ocelot/felix_vsc9959.c
···
 	if (taprio->num_entries > VSC9959_TAS_GCL_ENTRY_MAX)
 		return -ERANGE;
 
-	/* Set port num and disable ALWAYS_GUARD_BAND_SCH_Q, which means set
-	 * guard band to be implemented for nonschedule queues to schedule
-	 * queues transition.
+	/* Enable guard band. The switch will schedule frames without taking
+	 * their length into account. Thus we'll always need to enable the
+	 * guard band which reserves the time of a maximum sized frame at the
+	 * end of the time window.
+	 *
+	 * Although the ALWAYS_GUARD_BAND_SCH_Q bit is global for all ports, we
+	 * need to set PORT_NUM, because subsequent writes to PARAM_CFG_REG_n
+	 * operate on the port number.
 	 */
-	ocelot_rmw(ocelot,
-		   QSYS_TAS_PARAM_CFG_CTRL_PORT_NUM(port),
+	ocelot_rmw(ocelot, QSYS_TAS_PARAM_CFG_CTRL_PORT_NUM(port) |
+		   QSYS_TAS_PARAM_CFG_CTRL_ALWAYS_GUARD_BAND_SCH_Q,
 		   QSYS_TAS_PARAM_CFG_CTRL_PORT_NUM_M |
 		   QSYS_TAS_PARAM_CFG_CTRL_ALWAYS_GUARD_BAND_SCH_Q,
 		   QSYS_TAS_PARAM_CFG_CTRL);
···
 #include "sja1105_tas.h"
 
 #define SJA1105_UNKNOWN_MULTICAST	0x010000000000ull
+#define SJA1105_DEFAULT_VLAN		(VLAN_N_VID - 1)
 
 static const struct dsa_switch_ops sja1105_switch_ops;
 
···
 	default:
 		dev_err(dev, "Unsupported PHY mode %s!\n",
 			phy_modes(ports[i].phy_mode));
+		return -EINVAL;
 	}
 
 	/* Even though the SerDes port is able to drive SGMII autoneg
···
 	return 0;
 }
 
+/* Set up a default VLAN for untagged traffic injected from the CPU
+ * using management routes (e.g. STP, PTP) as opposed to tag_8021q.
+ * All DT-defined ports are members of this VLAN, and there are no
+ * restrictions on forwarding (since the CPU selects the destination).
+ * Frames from this VLAN will always be transmitted as untagged, and
+ * neither the bridge nor the 8021q module cannot create this VLAN ID.
+ */
 static int sja1105_init_static_vlan(struct sja1105_private *priv)
 {
 	struct sja1105_table *table;
···
 		.vmemb_port = 0,
 		.vlan_bc = 0,
 		.tag_port = 0,
-		.vlanid = 1,
+		.vlanid = SJA1105_DEFAULT_VLAN,
 	};
 	struct dsa_switch *ds = priv->ds;
 	int port;
 
 	table = &priv->static_config.tables[BLK_IDX_VLAN_LOOKUP];
 
-	/* The static VLAN table will only contain the initial pvid of 1.
-	 * All other VLANs are to be configured through dynamic entries,
-	 * and kept in the static configuration table as backing memory.
-	 */
 	if (table->entry_count) {
 		kfree(table->entries);
 		table->entry_count = 0;
···
 
 	table->entry_count = 1;
 
-	/* VLAN 1: all DT-defined ports are members; no restrictions on
-	 * forwarding; always transmit as untagged.
-	 */
 	for (port = 0; port < ds->num_ports; port++) {
 		struct sja1105_bridge_vlan *v;
 
···
 		pvid.vlan_bc |= BIT(port);
 		pvid.tag_port &= ~BIT(port);
 
-		/* Let traffic that don't need dsa_8021q (e.g. STP, PTP) be
-		 * transmitted as untagged.
-		 */
 		v = kzalloc(sizeof(*v), GFP_KERNEL);
 		if (!v)
 			return -ENOMEM;
 
 		v->port = port;
-		v->vid = 1;
+		v->vid = SJA1105_DEFAULT_VLAN;
 		v->untagged = true;
 		if (dsa_is_cpu_port(ds, port))
 			v->pvid = true;
···
 	bool pvid = flags & BRIDGE_VLAN_INFO_PVID;
 	struct sja1105_bridge_vlan *v;
 
-	list_for_each_entry(v, vlan_list, list)
-		if (v->port == port && v->vid == vid &&
-		    v->untagged == untagged && v->pvid == pvid)
+	list_for_each_entry(v, vlan_list, list) {
+		if (v->port == port && v->vid == vid) {
 			/* Already added */
-			return 0;
+			if (v->untagged == untagged && v->pvid == pvid)
+				/* Nothing changed */
+				return 0;
+
+			/* It's the same VLAN, but some of the flags changed
+			 * and the user did not bother to delete it first.
+			 * Update it and trigger sja1105_build_vlan_table.
+			 */
+			v->untagged = untagged;
+			v->pvid = pvid;
+			return 1;
+		}
+	}
 
 	v = kzalloc(sizeof(*v), GFP_KERNEL);
 	if (!v) {
···
 	rc = sja1105_static_config_load(priv, ports);
 	if (rc < 0) {
 		dev_err(ds->dev, "Failed to load static config: %d\n", rc);
-		return rc;
+		goto out_ptp_clock_unregister;
 	}
 	/* Configure the CGU (PHY link modes and speeds) */
 	rc = sja1105_clocking_setup(priv);
 	if (rc < 0) {
 		dev_err(ds->dev, "Failed to configure MII clocking: %d\n", rc);
-		return rc;
+		goto out_static_config_free;
 	}
 	/* On SJA1105, VLAN filtering per se is always enabled in hardware.
 	 * The only thing we can do to disable it is lie about what the 802.1Q
···
 
 	rc = sja1105_devlink_setup(ds);
 	if (rc < 0)
-		return rc;
+		goto out_static_config_free;
 
 	/* The DSA/switchdev model brings up switch ports in standalone mode by
 	 * default, and that means vlan_filtering is 0 since they're not under
···
 	rtnl_lock();
 	rc = sja1105_setup_8021q_tagging(ds, true);
 	rtnl_unlock();
+	if (rc)
+		goto out_devlink_teardown;
+
+	return 0;
+
+out_devlink_teardown:
+	sja1105_devlink_teardown(ds);
+out_ptp_clock_unregister:
+	sja1105_ptp_clock_unregister(ds);
+out_static_config_free:
+	sja1105_static_config_free(&priv->static_config);
 
 	return rc;
 }
···
 	priv->cbs = devm_kcalloc(dev, priv->info->num_cbs_shapers,
 				 sizeof(struct sja1105_cbs_entry),
 				 GFP_KERNEL);
-	if (!priv->cbs)
-		return -ENOMEM;
+	if (!priv->cbs) {
+		rc = -ENOMEM;
+		goto out_unregister_switch;
+	}
 	}
 
 	/* Connections between dsa_port and sja1105_port */
···
 			dev_err(ds->dev,
 				"failed to create deferred xmit thread: %d\n",
 				rc);
-			goto out;
+			goto out_destroy_workers;
 		}
 		skb_queue_head_init(&sp->xmit_queue);
 		sp->xmit_tpid = ETH_P_SJA1105;
···
 	}
 
 	return 0;
-out:
+
+out_destroy_workers:
 	while (port-- > 0) {
 		struct sja1105_port *sp = &priv->ports[port];
 
···
 
 		kthread_destroy_worker(sp->xmit_worker);
 	}
+
+out_unregister_switch:
+	dsa_unregister_switch(ds);
+
 	return rc;
 }
+1-1
drivers/net/ethernet/broadcom/bnx2.c
···
 		BNX2_WR(bp, PCI_COMMAND, reg);
 	} else if ((BNX2_CHIP_ID(bp) == BNX2_CHIP_ID_5706_A1) &&
 		!(bp->flags & BNX2_FLAG_PCIX)) {
-
 		dev_err(&pdev->dev,
 			"5706 A1 can only be used in a PCIX bus, aborting\n");
+		rc = -EPERM;
 		goto err_out_unmap;
 	}
 
+3-1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
···
 		goto failed;
 
 	/* SR-IOV capability was enabled but there are no VFs*/
-	if (iov->total == 0)
+	if (iov->total == 0) {
+		err = -EINVAL;
 		goto failed;
+	}
 
 	iov->nr_virtfn = min_t(u16, iov->total, num_vfs_param);
 
+110-28
drivers/net/ethernet/broadcom/bnxt/bnxt.c
···
 {
 	return (idx == NETXTREME_C_VF || idx == NETXTREME_E_VF ||
 		idx == NETXTREME_S_VF || idx == NETXTREME_C_VF_HV ||
-		idx == NETXTREME_E_VF_HV || idx == NETXTREME_E_P5_VF);
+		idx == NETXTREME_E_VF_HV || idx == NETXTREME_E_P5_VF ||
+		idx == NETXTREME_E_P5_VF_HV);
 }
 
 #define DB_CP_REARM_FLAGS	(DB_KEY_CP | DB_IDX_VALID)
···
 static void bnxt_hwrm_set_pg_attr(struct bnxt_ring_mem_info *rmem, u8 *pg_attr,
 				  __le64 *pg_dir)
 {
-	u8 pg_size = 0;
-
 	if (!rmem->nr_pages)
 		return;
 
-	if (BNXT_PAGE_SHIFT == 13)
-		pg_size = 1 << 4;
-	else if (BNXT_PAGE_SIZE == 16)
-		pg_size = 2 << 4;
-
-	*pg_attr = pg_size;
+	BNXT_SET_CTX_PAGE_ATTR(*pg_attr);
 	if (rmem->depth >= 1) {
 		if (rmem->depth == 2)
 			*pg_attr |= 2;
···
 	return rc;
 }
 
+static bool bnxt_exthdr_check(struct bnxt *bp, struct sk_buff *skb, int nw_off,
+			      u8 **nextp)
+{
+	struct ipv6hdr *ip6h = (struct ipv6hdr *)(skb->data + nw_off);
+	int hdr_count = 0;
+	u8 *nexthdr;
+	int start;
+
+	/* Check that there are at most 2 IPv6 extension headers, no
+	 * fragment header, and each is <= 64 bytes.
+	 */
+	start = nw_off + sizeof(*ip6h);
+	nexthdr = &ip6h->nexthdr;
+	while (ipv6_ext_hdr(*nexthdr)) {
+		struct ipv6_opt_hdr *hp;
+		int hdrlen;
+
+		if (hdr_count >= 3 || *nexthdr == NEXTHDR_NONE ||
+		    *nexthdr == NEXTHDR_FRAGMENT)
+			return false;
+		hp = __skb_header_pointer(NULL, start, sizeof(*hp), skb->data,
+					  skb_headlen(skb), NULL);
+		if (!hp)
+			return false;
+		if (*nexthdr == NEXTHDR_AUTH)
+			hdrlen = ipv6_authlen(hp);
+		else
+			hdrlen = ipv6_optlen(hp);
+
+		if (hdrlen > 64)
+			return false;
+		nexthdr = &hp->nexthdr;
+		start += hdrlen;
+		hdr_count++;
+	}
+	if (nextp) {
+		/* Caller will check inner protocol */
+		if (skb->encapsulation) {
+			*nextp = nexthdr;
+			return true;
+		}
+		*nextp = NULL;
+	}
+	/* Only support TCP/UDP for non-tunneled ipv6 and inner ipv6 */
+	return *nexthdr == IPPROTO_TCP || *nexthdr == IPPROTO_UDP;
+}
+
+/* For UDP, we can only handle 1 Vxlan port and 1 Geneve port. */
+static bool bnxt_udp_tunl_check(struct bnxt *bp, struct sk_buff *skb)
+{
+	struct udphdr *uh = udp_hdr(skb);
+	__be16 udp_port = uh->dest;
+
+	if (udp_port != bp->vxlan_port && udp_port != bp->nge_port)
+		return false;
+	if (skb->inner_protocol_type == ENCAP_TYPE_ETHER) {
+		struct ethhdr *eh = inner_eth_hdr(skb);
+
+		switch (eh->h_proto) {
+		case htons(ETH_P_IP):
+			return true;
+		case htons(ETH_P_IPV6):
+			return bnxt_exthdr_check(bp, skb,
+						 skb_inner_network_offset(skb),
+						 NULL);
+		}
+	}
+	return false;
+}
+
+static bool bnxt_tunl_check(struct bnxt *bp, struct sk_buff *skb, u8 l4_proto)
+{
+	switch (l4_proto) {
+	case IPPROTO_UDP:
+		return bnxt_udp_tunl_check(bp, skb);
+	case IPPROTO_IPIP:
+		return true;
+	case IPPROTO_GRE: {
+		switch (skb->inner_protocol) {
+		default:
+			return false;
+		case htons(ETH_P_IP):
+			return true;
+		case htons(ETH_P_IPV6):
+			fallthrough;
+		}
+	}
+	case IPPROTO_IPV6:
+		/* Check ext headers of inner ipv6 */
+		return bnxt_exthdr_check(bp, skb, skb_inner_network_offset(skb),
+					 NULL);
+	}
+	return false;
+}
+
 static netdev_features_t bnxt_features_check(struct sk_buff *skb,
 					     struct net_device *dev,
 					     netdev_features_t features)
 {
-	struct bnxt *bp;
-	__be16 udp_port;
-	u8 l4_proto = 0;
+	struct bnxt *bp = netdev_priv(dev);
+	u8 *l4_proto;
 
 	features = vlan_features_check(skb, features);
-	if (!skb->encapsulation)
-		return features;
-
 	switch (vlan_get_protocol(skb)) {
 	case htons(ETH_P_IP):
-		l4_proto = ip_hdr(skb)->protocol;
+		if (!skb->encapsulation)
+			return features;
+		l4_proto = &ip_hdr(skb)->protocol;
+		if (bnxt_tunl_check(bp, skb, *l4_proto))
+			return features;
 		break;
 	case htons(ETH_P_IPV6):
-		l4_proto = ipv6_hdr(skb)->nexthdr;
+		if (!bnxt_exthdr_check(bp, skb, skb_network_offset(skb),
+				       &l4_proto))
+			break;
+		if (!l4_proto || bnxt_tunl_check(bp, skb, *l4_proto))
+			return features;
 		break;
-	default:
-		return features;
 	}
-
-	if (l4_proto != IPPROTO_UDP)
-		return features;
-
-	bp = netdev_priv(dev);
-	/* For UDP, we can only handle 1 Vxlan port and 1 Geneve port. */
-	udp_port = udp_hdr(skb)->dest;
-	if (udp_port == bp->vxlan_port || udp_port == bp->nge_port)
-		return features;
 	return features & ~(NETIF_F_CSUM_MASK | NETIF_F_GSO_MASK);
 }
···
 }
 
 /* ------------------------------------------------------------------------- */
-static void fec_get_mac(struct net_device *ndev)
+static int fec_get_mac(struct net_device *ndev)
 {
 	struct fec_enet_private *fep = netdev_priv(ndev);
 	unsigned char *iap, tmpaddr[ETH_ALEN];
···
 			ret = of_get_mac_address(np, tmpaddr);
 			if (!ret)
 				iap = tmpaddr;
+			else if (ret == -EPROBE_DEFER)
+				return ret;
 		}
 	}
 
···
 		eth_hw_addr_random(ndev);
 		dev_info(&fep->pdev->dev, "Using random MAC address: %pM\n",
 			 ndev->dev_addr);
-		return;
+		return 0;
 	}
 
 	memcpy(ndev->dev_addr, iap, ETH_ALEN);
···
 	/* Adjust MAC if using macaddr */
 	if (iap == macaddr)
 		ndev->dev_addr[ETH_ALEN-1] = macaddr[ETH_ALEN-1] + fep->dev_id;
+
+	return 0;
 }
 
 /* ------------------------------------------------------------------------- */
···
 		return ret;
 	}
 
-	fec_enet_alloc_queue(ndev);
+	ret = fec_enet_alloc_queue(ndev);
+	if (ret)
+		return ret;
 
 	bd_size = (fep->total_tx_ring_size + fep->total_rx_ring_size) * dsize;
 
···
 	cbd_base = dmam_alloc_coherent(&fep->pdev->dev, bd_size, &bd_dma,
 				       GFP_KERNEL);
 	if (!cbd_base) {
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto free_queue_mem;
 	}
 
 	/* Get the Ethernet address */
-	fec_get_mac(ndev);
+	ret = fec_get_mac(ndev);
+	if (ret)
+		goto free_queue_mem;
+
 	/* make sure MAC we just acquired is programmed into the hw */
 	fec_set_mac_address(ndev, NULL);
 
···
 	fec_enet_update_ethtool_stats(ndev);
 
 	return 0;
+
+free_queue_mem:
+	fec_enet_free_queue(ndev);
+	return ret;
 }
 
 #ifdef CONFIG_OF
+12-9
drivers/net/ethernet/google/gve/gve_main.c
···
 	/* Double check we have no extra work.
 	 * Ensure unmask synchronizes with checking for work.
 	 */
-	dma_rmb();
+	mb();
 	if (block->tx)
 		reschedule |= gve_tx_poll(block, -1);
 	if (block->rx)
···
 		int vecs_left = new_num_ntfy_blks % 2;
 
 		priv->num_ntfy_blks = new_num_ntfy_blks;
+		priv->mgmt_msix_idx = priv->num_ntfy_blks;
 		priv->tx_cfg.max_queues = min_t(int, priv->tx_cfg.max_queues,
 						vecs_per_type);
 		priv->rx_cfg.max_queues = min_t(int, priv->rx_cfg.max_queues,
···
 {
 	int i;
 
-	/* Free the irqs */
-	for (i = 0; i < priv->num_ntfy_blks; i++) {
-		struct gve_notify_block *block = &priv->ntfy_blocks[i];
-		int msix_idx = i;
+	if (priv->msix_vectors) {
+		/* Free the irqs */
+		for (i = 0; i < priv->num_ntfy_blks; i++) {
+			struct gve_notify_block *block = &priv->ntfy_blocks[i];
+			int msix_idx = i;
 
-		irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector,
-				      NULL);
-		free_irq(priv->msix_vectors[msix_idx].vector, block);
+			irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector,
+					      NULL);
+			free_irq(priv->msix_vectors[msix_idx].vector, block);
+		}
+		free_irq(priv->msix_vectors[priv->mgmt_msix_idx].vector, priv);
 	}
 	dma_free_coherent(&priv->pdev->dev,
 			  priv->num_ntfy_blks * sizeof(*priv->ntfy_blocks),
 			  priv->ntfy_blocks, priv->ntfy_block_bus);
 	priv->ntfy_blocks = NULL;
-	free_irq(priv->msix_vectors[priv->mgmt_msix_idx].vector, priv);
 	pci_disable_msix(priv->pdev);
 	kvfree(priv->msix_vectors);
 	priv->msix_vectors = NULL;
+7-3
drivers/net/ethernet/google/gve/gve_tx.c
···
 	tx->dev = &priv->pdev->dev;
 	if (!tx->raw_addressing) {
 		tx->tx_fifo.qpl = gve_assign_tx_qpl(priv);
-
+		if (!tx->tx_fifo.qpl)
+			goto abort_with_desc;
 		/* map Tx FIFO */
 		if (gve_tx_fifo_init(priv, &tx->tx_fifo))
-			goto abort_with_desc;
+			goto abort_with_qpl;
 	}
 
 	tx->q_resources =
···
 abort_with_fifo:
 	if (!tx->raw_addressing)
 		gve_tx_fifo_release(priv, &tx->tx_fifo);
+abort_with_qpl:
+	if (!tx->raw_addressing)
+		gve_unassign_qpl(priv, tx->tx_fifo.qpl->id);
 abort_with_desc:
 	dma_free_coherent(hdev, bytes, tx->desc, tx->bus);
 	tx->desc = NULL;
···
 	struct gve_tx_ring *tx;
 	int nsegs;
 
-	WARN(skb_get_queue_mapping(skb) > priv->tx_cfg.num_queues,
+	WARN(skb_get_queue_mapping(skb) >= priv->tx_cfg.num_queues,
 	     "skb queue index out of range");
 	tx = &priv->tx[skb_get_queue_mapping(skb)];
 	if (unlikely(gve_maybe_stop_tx(tx, skb))) {
+3-3
drivers/net/ethernet/hisilicon/hns/hns_ethtool.c
@@ -192 +192 @@
 }

 /**
- *hns_nic_set_link_settings - implement ethtool set link ksettings
+ *hns_nic_set_link_ksettings - implement ethtool set link ksettings
  *@net_dev: net_device
  *@cmd: ethtool_link_ksettings
  *retuen 0 - success , negative --fail
@@ -827 +827 @@
 }

 /**
- * get_ethtool_stats - get detail statistics.
+ * hns_get_ethtool_stats - get detail statistics.
  * @netdev: net device
  * @stats: statistics info.
  * @data: statistics data.
@@ -885 +885 @@
 }

 /**
- * get_strings: Return a set of strings that describe the requested objects
+ * hns_get_strings: Return a set of strings that describe the requested objects
  * @netdev: net device
  * @stringset: string set ID.
  * @data: objects data.
+51-59
drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
@@ -264 +264 @@
 	struct hnae3_ae_dev *ae_dev = pci_get_drvdata(priv->ae_handle->pdev);
 	struct hns3_enet_coalesce *tx_coal = &tqp_vector->tx_group.coal;
 	struct hns3_enet_coalesce *rx_coal = &tqp_vector->rx_group.coal;
+	struct hns3_enet_coalesce *ptx_coal = &priv->tx_coal;
+	struct hns3_enet_coalesce *prx_coal = &priv->rx_coal;

-	/* initialize the configuration for interrupt coalescing.
-	 * 1. GL (Interrupt Gap Limiter)
-	 * 2. RL (Interrupt Rate Limiter)
-	 * 3. QL (Interrupt Quantity Limiter)
-	 *
-	 * Default: enable interrupt coalescing self-adaptive and GL
-	 */
-	tx_coal->adapt_enable = 1;
-	rx_coal->adapt_enable = 1;
+	tx_coal->adapt_enable = ptx_coal->adapt_enable;
+	rx_coal->adapt_enable = prx_coal->adapt_enable;

-	tx_coal->int_gl = HNS3_INT_GL_50K;
-	rx_coal->int_gl = HNS3_INT_GL_50K;
+	tx_coal->int_gl = ptx_coal->int_gl;
+	rx_coal->int_gl = prx_coal->int_gl;

-	rx_coal->flow_level = HNS3_FLOW_LOW;
-	tx_coal->flow_level = HNS3_FLOW_LOW;
+	rx_coal->flow_level = prx_coal->flow_level;
+	tx_coal->flow_level = ptx_coal->flow_level;

 	/* device version above V3(include V3), GL can configure 1us
 	 * unit, so uses 1us unit.
@@ -289 +294 @@
 		rx_coal->ql_enable = 1;
 		tx_coal->int_ql_max = ae_dev->dev_specs.int_ql_max;
 		rx_coal->int_ql_max = ae_dev->dev_specs.int_ql_max;
-		tx_coal->int_ql = HNS3_INT_QL_DEFAULT_CFG;
-		rx_coal->int_ql = HNS3_INT_QL_DEFAULT_CFG;
+		tx_coal->int_ql = ptx_coal->int_ql;
+		rx_coal->int_ql = prx_coal->int_ql;
 	}
 }
@@ -841 +846 @@
 		     l4.udp->dest == htons(4790))))
 		return false;

-	skb_checksum_help(skb);
-
 	return true;
 }
@@ -917 +924 @@
 		/* the stack computes the IP header already,
 		 * driver calculate l4 checksum when not TSO.
 		 */
-		skb_checksum_help(skb);
-		return 0;
+		return skb_checksum_help(skb);
 	}

 	hns3_set_outer_l2l3l4(skb, ol4_proto, ol_type_vlan_len_msec);
@@ -962 +970 @@
 		break;
 	case IPPROTO_UDP:
 		if (hns3_tunnel_csum_bug(skb))
-			break;
+			return skb_checksum_help(skb);

 		hns3_set_field(*type_cs_vlan_tso, HNS3_TXD_L4CS_B, 1);
 		hns3_set_field(*type_cs_vlan_tso, HNS3_TXD_L4T_S,
@@ -987 +995 @@
 		/* the stack computes the IP header already,
 		 * driver calculate l4 checksum when not TSO.
 		 */
-		skb_checksum_help(skb);
-		return 0;
+		return skb_checksum_help(skb);
 	}

 	return 0;
@@ -3835 +3844 @@
 	return ret;
 }

+static void hns3_nic_init_coal_cfg(struct hns3_nic_priv *priv)
+{
+	struct hnae3_ae_dev *ae_dev = pci_get_drvdata(priv->ae_handle->pdev);
+	struct hns3_enet_coalesce *tx_coal = &priv->tx_coal;
+	struct hns3_enet_coalesce *rx_coal = &priv->rx_coal;
+
+	/* initialize the configuration for interrupt coalescing.
+	 * 1. GL (Interrupt Gap Limiter)
+	 * 2. RL (Interrupt Rate Limiter)
+	 * 3. QL (Interrupt Quantity Limiter)
+	 *
+	 * Default: enable interrupt coalescing self-adaptive and GL
+	 */
+	tx_coal->adapt_enable = 1;
+	rx_coal->adapt_enable = 1;
+
+	tx_coal->int_gl = HNS3_INT_GL_50K;
+	rx_coal->int_gl = HNS3_INT_GL_50K;
+
+	rx_coal->flow_level = HNS3_FLOW_LOW;
+	tx_coal->flow_level = HNS3_FLOW_LOW;
+
+	if (ae_dev->dev_specs.int_ql_max) {
+		tx_coal->int_ql = HNS3_INT_QL_DEFAULT_CFG;
+		rx_coal->int_ql = HNS3_INT_QL_DEFAULT_CFG;
+	}
+}
+
 static int hns3_nic_alloc_vector_data(struct hns3_nic_priv *priv)
 {
 	struct hnae3_handle *h = priv->ae_handle;
@@ -4314 +4295 @@
 		goto out_get_ring_cfg;
 	}

+	hns3_nic_init_coal_cfg(priv);
+
 	ret = hns3_nic_alloc_vector_data(priv);
 	if (ret) {
 		ret = -ENOMEM;
@@ -4337 +4316 @@
 	ret = hns3_init_phy(netdev);
 	if (ret)
 		goto out_init_phy;
-
-	ret = register_netdev(netdev);
-	if (ret) {
-		dev_err(priv->dev, "probe register netdev fail!\n");
-		goto out_reg_netdev_fail;
-	}

 	/* the device can work without cpu rmap, only aRFS needs it */
 	ret = hns3_set_rx_cpu_rmap(netdev);
@@ -4370 +4355 @@
 	if (ae_dev->dev_version >= HNAE3_DEVICE_VERSION_V3)
 		set_bit(HNAE3_PFLAG_LIMIT_PROMISC, &handle->supported_pflags);

+	ret = register_netdev(netdev);
+	if (ret) {
+		dev_err(priv->dev, "probe register netdev fail!\n");
+		goto out_reg_netdev_fail;
+	}
+
 	if (netif_msg_drv(handle))
 		hns3_info_show(priv);

 	return ret;

+out_reg_netdev_fail:
+	hns3_dbg_uninit(handle);
 out_client_start:
 	hns3_free_rx_cpu_rmap(netdev);
 	hns3_nic_uninit_irq(priv);
 out_init_irq_fail:
-	unregister_netdev(netdev);
-out_reg_netdev_fail:
 	hns3_uninit_phy(netdev);
 out_init_phy:
 	hns3_uninit_all_ring(priv);
@@ -4592 +4571 @@
 	return 0;
 }

-static void hns3_store_coal(struct hns3_nic_priv *priv)
-{
-	/* ethtool only support setting and querying one coal
-	 * configuration for now, so save the vector 0' coal
-	 * configuration here in order to restore it.
-	 */
-	memcpy(&priv->tx_coal, &priv->tqp_vector[0].tx_group.coal,
-	       sizeof(struct hns3_enet_coalesce));
-	memcpy(&priv->rx_coal, &priv->tqp_vector[0].rx_group.coal,
-	       sizeof(struct hns3_enet_coalesce));
-}
-
-static void hns3_restore_coal(struct hns3_nic_priv *priv)
-{
-	u16 vector_num = priv->vector_num;
-	int i;
-
-	for (i = 0; i < vector_num; i++) {
-		memcpy(&priv->tqp_vector[i].tx_group.coal, &priv->tx_coal,
-		       sizeof(struct hns3_enet_coalesce));
-		memcpy(&priv->tqp_vector[i].rx_group.coal, &priv->rx_coal,
-		       sizeof(struct hns3_enet_coalesce));
-	}
-}
-
 static int hns3_reset_notify_down_enet(struct hnae3_handle *handle)
 {
 	struct hnae3_knic_private_info *kinfo = &handle->kinfo;
@@ -4649 +4653 @@
 	ret = hns3_nic_alloc_vector_data(priv);
 	if (ret)
 		goto err_put_ring;
-
-	hns3_restore_coal(priv);

 	ret = hns3_nic_init_vector_data(priv);
 	if (ret)
@@ -4714 +4720 @@
 	hns3_reset_tx_queue(priv->ae_handle);

 	hns3_nic_uninit_vector_data(priv);
-
-	hns3_store_coal(priv);

 	hns3_nic_dealloc_vector_data(priv);
@@ -710 +710 @@
 	unsigned int flag;
 	int ret = 0;

-	memset(&resp_msg, 0, sizeof(resp_msg));
 	/* handle all the mailbox requests in the queue */
 	while (!hclge_cmd_crq_empty(&hdev->hw)) {
 		if (test_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state)) {
@@ -736 +737 @@
 		vport = &hdev->vport[req->mbx_src_vfid];

 		trace_hclge_pf_mbx_get(hdev, req);
+
+		/* clear the resp_msg before processing every mailbox message */
+		memset(&resp_msg, 0, sizeof(resp_msg));

 		switch (req->msg.code) {
 		case HCLGE_MBX_MAP_RING_TO_VECTOR:
+7-9
drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
@@ -467 +467 @@
 	return err;
 }

-static s32 ixgbe_set_vf_lpe(struct ixgbe_adapter *adapter, u32 *msgbuf, u32 vf)
+static int ixgbe_set_vf_lpe(struct ixgbe_adapter *adapter, u32 max_frame, u32 vf)
 {
 	struct ixgbe_hw *hw = &adapter->hw;
-	int max_frame = msgbuf[1];
 	u32 max_frs;
+
+	if (max_frame < ETH_MIN_MTU || max_frame > IXGBE_MAX_JUMBO_FRAME_SIZE) {
+		e_err(drv, "VF max_frame %d out of range\n", max_frame);
+		return -EINVAL;
+	}

 	/*
 	 * For 82599EB we have to keep all PFs and VFs operating with
@@ -535 +531 @@
 			e_err(drv, "VF max_frame %d out of range\n", max_frame);
 			return err;
 		}
-	}
-
-	/* MTU < 68 is an error and causes problems on some kernels */
-	if (max_frame > IXGBE_MAX_JUMBO_FRAME_SIZE) {
-		e_err(drv, "VF max_frame %d out of range\n", max_frame);
-		return -EINVAL;
 	}

 	/* pull current max frame size from hardware */
@@ -1247 +1249 @@
 		retval = ixgbe_set_vf_vlan_msg(adapter, msgbuf, vf);
 		break;
 	case IXGBE_VF_SET_LPE:
-		retval = ixgbe_set_vf_lpe(adapter, msgbuf, vf);
+		retval = ixgbe_set_vf_lpe(adapter, msgbuf[1], vf);
 		break;
 	case IXGBE_VF_SET_MACVLAN:
 		retval = ixgbe_set_vf_macvlan_msg(adapter, msgbuf, vf);
+6-6
drivers/net/ethernet/korina.c
@@ -1315 +1315 @@
 	lp->tx_irq = platform_get_irq_byname(pdev, "tx");

 	p = devm_platform_ioremap_resource_byname(pdev, "emac");
-	if (!p) {
+	if (IS_ERR(p)) {
 		printk(KERN_ERR DRV_NAME ": cannot remap registers\n");
-		return -ENOMEM;
+		return PTR_ERR(p);
 	}
 	lp->eth_regs = p;

 	p = devm_platform_ioremap_resource_byname(pdev, "dma_rx");
-	if (!p) {
+	if (IS_ERR(p)) {
 		printk(KERN_ERR DRV_NAME ": cannot remap Rx DMA registers\n");
-		return -ENOMEM;
+		return PTR_ERR(p);
 	}
 	lp->rx_dma_regs = p;

 	p = devm_platform_ioremap_resource_byname(pdev, "dma_tx");
-	if (!p) {
+	if (IS_ERR(p)) {
 		printk(KERN_ERR DRV_NAME ": cannot remap Tx DMA registers\n");
-		return -ENOMEM;
+		return PTR_ERR(p);
 	}
 	lp->tx_dma_regs = p;
+9-5
drivers/net/ethernet/lantiq_xrx200.c
@@ -154 +154 @@

 static int xrx200_alloc_skb(struct xrx200_chan *ch)
 {
+	dma_addr_t mapping;
 	int ret = 0;

 	ch->skb[ch->dma.desc] = netdev_alloc_skb_ip_align(ch->priv->net_dev,
@@ -164 +163 @@
 		goto skip;
 	}

-	ch->dma.desc_base[ch->dma.desc].addr = dma_map_single(ch->priv->dev,
-			ch->skb[ch->dma.desc]->data, XRX200_DMA_DATA_LEN,
-			DMA_FROM_DEVICE);
-	if (unlikely(dma_mapping_error(ch->priv->dev,
-				       ch->dma.desc_base[ch->dma.desc].addr))) {
+	mapping = dma_map_single(ch->priv->dev, ch->skb[ch->dma.desc]->data,
+				 XRX200_DMA_DATA_LEN, DMA_FROM_DEVICE);
+	if (unlikely(dma_mapping_error(ch->priv->dev, mapping))) {
 		dev_kfree_skb_any(ch->skb[ch->dma.desc]);
 		ret = -ENOMEM;
 		goto skip;
 	}

+	ch->dma.desc_base[ch->dma.desc].addr = mapping;
+	/* Make sure the address is written before we give it to HW */
+	wmb();
 skip:
 	ch->dma.desc_base[ch->dma.desc].ctl =
 		LTQ_DMA_OWN | LTQ_DMA_RX_OFFSET(NET_IP_ALIGN) |
@@ -198 +196 @@
 	ch->dma.desc %= LTQ_DESC_NUM;

 	if (ret) {
+		ch->skb[ch->dma.desc] = skb;
+		net_dev->stats.rx_dropped++;
 		netdev_err(net_dev, "failed to allocate new rx buffer\n");
 		return ret;
 	}
@@ -3839 +3839 @@
 	return ret;
 }

+static void mvpp2_buff_hdr_pool_put(struct mvpp2_port *port, struct mvpp2_rx_desc *rx_desc,
+				    int pool, u32 rx_status)
+{
+	phys_addr_t phys_addr, phys_addr_next;
+	dma_addr_t dma_addr, dma_addr_next;
+	struct mvpp2_buff_hdr *buff_hdr;
+
+	phys_addr = mvpp2_rxdesc_dma_addr_get(port, rx_desc);
+	dma_addr = mvpp2_rxdesc_cookie_get(port, rx_desc);
+
+	do {
+		buff_hdr = (struct mvpp2_buff_hdr *)phys_to_virt(phys_addr);
+
+		phys_addr_next = le32_to_cpu(buff_hdr->next_phys_addr);
+		dma_addr_next = le32_to_cpu(buff_hdr->next_dma_addr);
+
+		if (port->priv->hw_version >= MVPP22) {
+			phys_addr_next |= ((u64)buff_hdr->next_phys_addr_high << 32);
+			dma_addr_next |= ((u64)buff_hdr->next_dma_addr_high << 32);
+		}
+
+		mvpp2_bm_pool_put(port, pool, dma_addr, phys_addr);
+
+		phys_addr = phys_addr_next;
+		dma_addr = dma_addr_next;
+
+	} while (!MVPP2_B_HDR_INFO_IS_LAST(le16_to_cpu(buff_hdr->info)));
+}
+
 /* Main rx processing */
 static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
		    int rx_todo, struct mvpp2_rx_queue *rxq)
@@ -3914 +3885 @@
			MVPP2_RXD_BM_POOL_ID_OFFS;
 		bm_pool = &port->priv->bm_pools[pool];

-		/* In case of an error, release the requested buffer pointer
-		 * to the Buffer Manager. This request process is controlled
-		 * by the hardware, and the information about the buffer is
-		 * comprised by the RX descriptor.
-		 */
-		if (rx_status & MVPP2_RXD_ERR_SUMMARY)
-			goto err_drop_frame;
-
 		if (port->priv->percpu_pools) {
 			pp = port->priv->page_pool[pool];
 			dma_dir = page_pool_get_dma_dir(pp);
@@ -3924 +3903 @@
 		dma_sync_single_for_cpu(dev->dev.parent, dma_addr,
					rx_bytes + MVPP2_MH_SIZE,
					dma_dir);
+
+		/* Buffer header not supported */
+		if (rx_status & MVPP2_RXD_BUF_HDR)
+			goto err_drop_frame;
+
+		/* In case of an error, release the requested buffer pointer
+		 * to the Buffer Manager. This request process is controlled
+		 * by the hardware, and the information about the buffer is
+		 * comprised by the RX descriptor.
+		 */
+		if (rx_status & MVPP2_RXD_ERR_SUMMARY)
+			goto err_drop_frame;

 		/* Prefetch header */
 		prefetch(data);
@@ -4018 +3985 @@
 		dev->stats.rx_errors++;
 		mvpp2_rx_error(port, rx_desc);
 		/* Return the buffer to the pool */
-		mvpp2_bm_pool_put(port, pool, dma_addr, phys_addr);
+		if (rx_status & MVPP2_RXD_BUF_HDR)
+			mvpp2_buff_hdr_pool_put(port, rx_desc, pool, rx_status);
+		else
+			mvpp2_bm_pool_put(port, pool, dma_addr, phys_addr);
 	}

 	rcu_read_unlock();
@@ -65 +65 @@
 {
 	struct mlx5_flow_table_attr ft_attr = {};
 	struct mlx5_flow_namespace *root_ns;
-	int err;
+	int err, err2;

 	root_ns = mlx5_get_flow_namespace(dev, MLX5_FLOW_NAMESPACE_FDB);
 	if (!root_ns) {
@@ -76 +76 @@
 	/* As this is the terminating action then the termination table is the
 	 * same prio as the slow path
 	 */
-	ft_attr.flags = MLX5_FLOW_TABLE_TERMINATION |
+	ft_attr.flags = MLX5_FLOW_TABLE_TERMINATION | MLX5_FLOW_TABLE_UNMANAGED |
			MLX5_FLOW_TABLE_TUNNEL_EN_REFORMAT;
-	ft_attr.prio = FDB_SLOW_PATH;
+	ft_attr.prio = FDB_TC_OFFLOAD;
 	ft_attr.max_fte = 1;
+	ft_attr.level = 1;
 	ft_attr.autogroup.max_num_groups = 1;
 	tt->termtbl = mlx5_create_auto_grouped_flow_table(root_ns, &ft_attr);
 	if (IS_ERR(tt->termtbl)) {
-		esw_warn(dev, "Failed to create termination table (error %d)\n",
-			 IS_ERR(tt->termtbl));
-		return -EOPNOTSUPP;
+		err = PTR_ERR(tt->termtbl);
+		esw_warn(dev, "Failed to create termination table, err %pe\n", tt->termtbl);
+		return err;
 	}

 	tt->rule = mlx5_add_flow_rules(tt->termtbl, NULL, flow_act,
				       &tt->dest, 1);
 	if (IS_ERR(tt->rule)) {
-		esw_warn(dev, "Failed to create termination table rule (error %d)\n",
-			 IS_ERR(tt->rule));
+		err = PTR_ERR(tt->rule);
+		esw_warn(dev, "Failed to create termination table rule, err %pe\n", tt->rule);
 		goto add_flow_err;
 	}
 	return 0;

 add_flow_err:
-	err = mlx5_destroy_flow_table(tt->termtbl);
-	if (err)
-		esw_warn(dev, "Failed to destroy termination table\n");
+	err2 = mlx5_destroy_flow_table(tt->termtbl);
+	if (err2)
+		esw_warn(dev, "Failed to destroy termination table, err %d\n", err2);

-	return -EOPNOTSUPP;
+	return err;
 }

 static struct mlx5_termtbl_handle *
@@ -173 +172 @@
 	}
 }

-static bool mlx5_eswitch_termtbl_is_encap_reformat(struct mlx5_pkt_reformat *rt)
-{
-	switch (rt->reformat_type) {
-	case MLX5_REFORMAT_TYPE_L2_TO_VXLAN:
-	case MLX5_REFORMAT_TYPE_L2_TO_NVGRE:
-	case MLX5_REFORMAT_TYPE_L2_TO_L2_TUNNEL:
-	case MLX5_REFORMAT_TYPE_L2_TO_L3_TUNNEL:
-		return true;
-	default:
-		return false;
-	}
-}
-
 static void
 mlx5_eswitch_termtbl_actions_move(struct mlx5_flow_act *src,
				  struct mlx5_flow_act *dst)
@@ -189 +201 @@
			memcpy(&dst->vlan[1], &src->vlan[1], sizeof(src->vlan[1]));
			memset(&src->vlan[1], 0, sizeof(src->vlan[1]));
		}
-	}
-
-	if (src->action & MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT &&
-	    mlx5_eswitch_termtbl_is_encap_reformat(src->pkt_reformat)) {
-		src->action &= ~MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT;
-		dst->action |= MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT;
-		dst->pkt_reformat = src->pkt_reformat;
-		src->pkt_reformat = NULL;
 	}
 }
@@ -218 +238 @@
 	int i;

 	if (!MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, termination_table) ||
+	    !MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, ignore_flow_level) ||
	    attr->flags & MLX5_ESW_ATTR_FLAG_SLOW_PATH ||
	    !mlx5_eswitch_offload_is_uplink_port(esw, spec))
		return false;
@@ -260 +279 @@
 		if (dest[i].type != MLX5_FLOW_DESTINATION_TYPE_VPORT)
 			continue;

+		if (attr->dests[num_vport_dests].flags & MLX5_ESW_DEST_ENCAP) {
+			term_tbl_act.action |= MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT;
+			term_tbl_act.pkt_reformat = attr->dests[num_vport_dests].pkt_reformat;
+		} else {
+			term_tbl_act.action &= ~MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT;
+			term_tbl_act.pkt_reformat = NULL;
+		}
+
 		/* get the terminating table for the action list */
 		tt = mlx5_eswitch_termtbl_get_create(esw, &term_tbl_act,
						     &dest[i], attr);
 		if (IS_ERR(tt)) {
-			esw_warn(esw->dev, "Failed to get termination table (error %d)\n",
-				 IS_ERR(tt));
+			esw_warn(esw->dev, "Failed to get termination table, err %pe\n", tt);
 			goto revert_changes;
 		}
 		attr->dests[num_vport_dests].termtbl = tt;
@@ -289 +301 @@
 		goto revert_changes;

 	/* create the FTE */
+	flow_act->action &= ~MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT;
+	flow_act->pkt_reformat = NULL;
+	flow_act->flags |= FLOW_ACT_IGNORE_FLOW_LEVEL;
 	rule = mlx5_add_flow_rules(fdb, spec, flow_act, dest, num_dest);
 	if (IS_ERR(rule))
 		goto revert_changes;
+6
drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c
@@ -307 +307 @@
 	struct lag_mp *mp = &ldev->lag_mp;
 	int err;

+	/* always clear mfi, as it might become stale when a route delete event
+	 * has been missed
+	 */
+	mp->mfi = NULL;
+
 	if (mp->fib_nb.notifier_call)
 		return 0;
@@ -340 +335 @@
 	unregister_fib_notifier(&init_net, &mp->fib_nb);
 	destroy_workqueue(mp->wq);
 	mp->fib_nb.notifier_call = NULL;
+	mp->mfi = NULL;
 }
@@ -1196 +1196 @@
  */
 static int stmmac_init_phy(struct net_device *dev)
 {
-	struct ethtool_wolinfo wol = { .cmd = ETHTOOL_GWOL };
 	struct stmmac_priv *priv = netdev_priv(dev);
 	struct device_node *node;
 	int ret;
@@ -1221 +1222 @@
 		ret = phylink_connect_phy(priv->phylink, phydev);
 	}

-	phylink_ethtool_get_wol(priv->phylink, &wol);
-	device_set_wakeup_capable(priv->device, !!wol.supported);
+	if (!priv->plat->pmt) {
+		struct ethtool_wolinfo wol = { .cmd = ETHTOOL_GWOL };
+
+		phylink_ethtool_get_wol(priv->phylink, &wol);
+		device_set_wakeup_capable(priv->device, !!wol.supported);
+	}

 	return ret;
 }
@@ -5891 +5888 @@
 	struct stmmac_priv *priv = netdev_priv(ndev);
 	int ret = 0;

+	ret = pm_runtime_get_sync(priv->device);
+	if (ret < 0) {
+		pm_runtime_put_noidle(priv->device);
+		return ret;
+	}
+
 	ret = eth_mac_addr(ndev, addr);
 	if (ret)
-		return ret;
+		goto set_mac_error;

 	stmmac_set_umac_addr(priv, priv->hw, ndev->dev_addr, 0);
+
+set_mac_error:
+	pm_runtime_put(priv->device);

 	return ret;
 }
@@ -6200 +6188 @@
 	bool is_double = false;
 	int ret;

-	ret = pm_runtime_get_sync(priv->device);
-	if (ret < 0) {
-		pm_runtime_put_noidle(priv->device);
-		return ret;
-	}
-
 	if (be16_to_cpu(proto) == ETH_P_8021AD)
 		is_double = true;
@@ -6224 +6218 @@
 	struct stmmac_priv *priv = netdev_priv(ndev);
 	bool is_double = false;
 	int ret;
+
+	ret = pm_runtime_get_sync(priv->device);
+	if (ret < 0) {
+		pm_runtime_put_noidle(priv->device);
+		return ret;
+	}

 	if (be16_to_cpu(proto) == ETH_P_8021AD)
 		is_double = true;
+2-2
drivers/net/ethernet/ti/netcp_core.c
@@ -1350 +1350 @@
 	tx_pipe->dma_queue = knav_queue_open(name, tx_pipe->dma_queue_id,
					     KNAV_QUEUE_SHARED);
 	if (IS_ERR(tx_pipe->dma_queue)) {
-		dev_err(dev, "Could not open DMA queue for channel \"%s\": %d\n",
-			name, ret);
+		dev_err(dev, "Could not open DMA queue for channel \"%s\": %pe\n",
+			name, tx_pipe->dma_queue);
 		ret = PTR_ERR(tx_pipe->dma_queue);
 		goto err;
 	}
+2
drivers/net/ipa/ipa.h
@@ -58 +58 @@
  * @mem_virt:		Virtual address of IPA-local memory space
  * @mem_offset:		Offset from @mem_virt used for access to IPA memory
  * @mem_size:		Total size (bytes) of memory at @mem_virt
+ * @mem_count:		Number of entries in the mem array
  * @mem:		Array of IPA-local memory region descriptors
  * @imem_iova:		I/O virtual address of IPA region in IMEM
  * @imem_size:		Size of IMEM region
@@ -104 +103 @@
 	void			*mem_virt;
 	u32			mem_offset;
 	u32			mem_size;
+	u32			mem_count;
 	const struct ipa_mem	*mem;

 	unsigned long		imem_iova;
+2-1
drivers/net/ipa/ipa_mem.c
@@ -180 +180 @@
 	 * for the region, write "canary" values in the space prior to
 	 * the region's base address.
 	 */
-	for (mem_id = 0; mem_id < IPA_MEM_COUNT; mem_id++) {
+	for (mem_id = 0; mem_id < ipa->mem_count; mem_id++) {
 		const struct ipa_mem *mem = &ipa->mem[mem_id];
 		u16 canary_count;
 		__le32 *canary;
@@ -487 +487 @@
 	ipa->mem_size = resource_size(res);

 	/* The ipa->mem[] array is indexed by enum ipa_mem_id values */
+	ipa->mem_count = mem_data->local_count;
 	ipa->mem = mem_data->local;

 	ret = ipa_imem_init(ipa, mem_data->imem_addr, mem_data->imem_size);
@@ -607 +607 @@
 	struct mdio_device *mdiodev;
 	int i;

-	BUG_ON(bus->state != MDIOBUS_REGISTERED);
+	if (WARN_ON_ONCE(bus->state != MDIOBUS_REGISTERED))
+		return;
 	bus->state = MDIOBUS_UNREGISTERED;

 	for (i = 0; i < PHY_MAX_ADDR; i++) {
+1-1
drivers/net/usb/cdc_eem.c
@@ -26 +26 @@
  * for transport over USB using a simpler USB device model than the
  * previous CDC "Ethernet Control Model" (ECM, or "CDC Ethernet").
  *
- * For details, see www.usb.org/developers/devclass_docs/CDC_EEM10.pdf
+ * For details, see https://usb.org/sites/default/files/CDC_EEM10.pdf
  *
  * This version has been tested with GIGAntIC WuaoW SIM Smart Card on 2.6.24,
  * 2.6.27 and 2.6.30rc2 kernel.
+22-23
drivers/net/usb/hso.c
@@ -1689 +1689 @@
 	spin_unlock_irqrestore(&serial->serial_lock, flags);

 	return usb_control_msg(serial->parent->usb,
-			       usb_rcvctrlpipe(serial->parent->usb, 0), 0x22,
+			       usb_sndctrlpipe(serial->parent->usb, 0), 0x22,
			       0x21, val, if_num, NULL, 0,
			       USB_CTRL_SET_TIMEOUT);
 }
@@ -2436 +2436 @@
 	if (hso_dev->usb_gone)
 		rv = 0;
 	else
-		rv = usb_control_msg(hso_dev->usb, usb_rcvctrlpipe(hso_dev->usb, 0),
+		rv = usb_control_msg(hso_dev->usb, usb_sndctrlpipe(hso_dev->usb, 0),
				     enabled ? 0x82 : 0x81, 0x40, 0, 0, NULL, 0,
				     USB_CTRL_SET_TIMEOUT);
 	mutex_unlock(&hso_dev->mutex);
@@ -2618 +2618 @@
 		num_urbs = 2;
 		serial->tiocmget = kzalloc(sizeof(struct hso_tiocmget),
					   GFP_KERNEL);
+		if (!serial->tiocmget)
+			goto exit;
 		serial->tiocmget->serial_state_notification
			= kzalloc(sizeof(struct hso_serial_state_notification),
					   GFP_KERNEL);
-		/* it isn't going to break our heart if serial->tiocmget
-		 * allocation fails don't bother checking this.
-		 */
-		if (serial->tiocmget && serial->tiocmget->serial_state_notification) {
-			tiocmget = serial->tiocmget;
-			tiocmget->endp = hso_get_ep(interface,
-						    USB_ENDPOINT_XFER_INT,
-						    USB_DIR_IN);
-			if (!tiocmget->endp) {
-				dev_err(&interface->dev, "Failed to find INT IN ep\n");
-				goto exit;
-			}
-
-			tiocmget->urb = usb_alloc_urb(0, GFP_KERNEL);
-			if (tiocmget->urb) {
-				mutex_init(&tiocmget->mutex);
-				init_waitqueue_head(&tiocmget->waitq);
-			} else
-				hso_free_tiomget(serial);
+		if (!serial->tiocmget->serial_state_notification)
+			goto exit;
+		tiocmget = serial->tiocmget;
+		tiocmget->endp = hso_get_ep(interface,
+					    USB_ENDPOINT_XFER_INT,
+					    USB_DIR_IN);
+		if (!tiocmget->endp) {
+			dev_err(&interface->dev, "Failed to find INT IN ep\n");
+			goto exit;
 		}
-	}
-	else
+
+		tiocmget->urb = usb_alloc_urb(0, GFP_KERNEL);
+		if (!tiocmget->urb)
+			goto exit;
+
+		mutex_init(&tiocmget->mutex);
+		init_waitqueue_head(&tiocmget->waitq);
+	} else {
 		num_urbs = 1;
+	}

 	if (hso_serial_common_create(serial, num_urbs, BULK_URB_RX_SIZE,
				     BULK_URB_TX_SIZE))
@@ -2779 +2779 @@
	 */
	spin_lock_bh(&ab->base_lock);
	peer = ath11k_peer_find(ab, arvif->vdev_id, peer_addr);
+
+	/* flush the fragments cache during key (re)install to
+	 * ensure all frags in the new frag list belong to the same key.
+	 */
+	if (peer && cmd == SET_KEY)
+		ath11k_peer_frags_flush(ar, peer);
	spin_unlock_bh(&ab->base_lock);

	if (!peer) {
@@ -3107 +3107 @@
	if (ctrl->ctrl.icdoff) {
		dev_err(ctrl->ctrl.device, "icdoff %d is not supported!\n",
			ctrl->ctrl.icdoff);
+		ret = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
		goto out_disconnect_admin_queue;
	}
@@ -3115 +3114 @@
	if (!(ctrl->ctrl.sgls & ((1 << 0) | (1 << 1)))) {
		dev_err(ctrl->ctrl.device,
			"Mandatory sgls are not supported!\n");
+		ret = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
		goto out_disconnect_admin_queue;
	}
@@ -3282 +3280 @@
	if (ctrl->ctrl.state != NVME_CTRL_CONNECTING)
		return;

-	if (portptr->port_state == FC_OBJSTATE_ONLINE)
+	if (portptr->port_state == FC_OBJSTATE_ONLINE) {
		dev_info(ctrl->ctrl.device,
			"NVME-FC{%d}: reset: Reconnect attempt failed (%d)\n",
			ctrl->cnum, status);
-	else if (time_after_eq(jiffies, rport->dev_loss_end))
+		if (status > 0 && (status & NVME_SC_DNR))
+			recon = false;
+	} else if (time_after_eq(jiffies, rport->dev_loss_end))
		recon = false;

	if (recon && nvmf_should_reconnect(&ctrl->ctrl)) {
@@ -3302 +3298 @@

		queue_delayed_work(nvme_wq, &ctrl->connect_work, recon_delay);
	} else {
-		if (portptr->port_state == FC_OBJSTATE_ONLINE)
-			dev_warn(ctrl->ctrl.device,
-				"NVME-FC{%d}: Max reconnect attempts (%d) "
-				"reached.\n",
-				ctrl->cnum, ctrl->ctrl.nr_reconnects);
-		else
+		if (portptr->port_state == FC_OBJSTATE_ONLINE) {
+			if (status > 0 && (status & NVME_SC_DNR))
+				dev_warn(ctrl->ctrl.device,
+					 "NVME-FC{%d}: reconnect failure\n",
+					 ctrl->cnum);
+			else
+				dev_warn(ctrl->ctrl.device,
+					 "NVME-FC{%d}: Max reconnect attempts "
+					 "(%d) reached.\n",
+					 ctrl->cnum, ctrl->ctrl.nr_reconnects);
+		} else
			dev_warn(ctrl->ctrl.device,
				"NVME-FC{%d}: dev_loss_tmo (%d) expired "
				"while waiting for remoteport connectivity.\n",
+11-4
drivers/nvme/target/core.c
@@ -388 +388 @@
 {
	struct nvmet_ctrl *ctrl = container_of(to_delayed_work(work),
			struct nvmet_ctrl, ka_work);
-	bool cmd_seen = ctrl->cmd_seen;
+	bool reset_tbkas = ctrl->reset_tbkas;

-	ctrl->cmd_seen = false;
-	if (cmd_seen) {
+	ctrl->reset_tbkas = false;
+	if (reset_tbkas) {
		pr_debug("ctrl %d reschedule traffic based keep-alive timer\n",
			ctrl->cntlid);
		schedule_delayed_work(&ctrl->ka_work, ctrl->kato * HZ);
@@ -804 +804 @@
	percpu_ref_exit(&sq->ref);

	if (ctrl) {
+		/*
+		 * The teardown flow may take some time, and the host may not
+		 * send us keep-alive during this period, hence reset the
+		 * traffic based keep-alive timer so we don't trigger a
+		 * controller teardown as a result of a keep-alive expiration.
+		 */
+		ctrl->reset_tbkas = true;
		nvmet_ctrl_put(ctrl);
		sq->ctrl = NULL; /* allows reusing the queue later */
	}
@@ -959 +952 @@
	}

	if (sq->ctrl)
-		sq->ctrl->cmd_seen = true;
+		sq->ctrl->reset_tbkas = true;

	return true;
@@ -550 +550 @@
		 * nvmet_req_init is completed.
		 */
		if (queue->rcv_state == NVMET_TCP_RECV_PDU &&
-		    len && len < cmd->req.port->inline_data_size &&
+		    len && len <= cmd->req.port->inline_data_size &&
		    nvme_is_write(cmd->req.cmd))
			return;
	}
@@ -297 +297 @@
	 * e.g. verify that new path is compatible with the current
	 * configuration.
	 */
-	int (*verify_path)(struct dasd_device *, __u8);
	int (*pe_handler)(struct dasd_device *, __u8, __u8);

	/*
+4
drivers/s390/cio/vfio_ccw_cp.c
@@ -638 +638 @@
	static DEFINE_RATELIMIT_STATE(ratelimit_state, 5 * HZ, 1);
	int ret;

+	/* this is an error in the caller */
+	if (cp->initialized)
+		return -EBUSY;
+
	/*
	 * We only support prefetching the channel program. We assume all channel
	 * programs executed by supported guests likewise support prefetching.
+10-2
drivers/s390/cio/vfio_ccw_drv.c
@@ -86 +86 @@
	struct vfio_ccw_private *private;
	struct irb *irb;
	bool is_final;
+	bool cp_is_finished = false;

	private = container_of(work, struct vfio_ccw_private, io_work);
	irb = &private->irb;
@@ -95 +94 @@
		      (SCSW_ACTL_DEVACT | SCSW_ACTL_SCHACT));
	if (scsw_is_solicited(&irb->scsw)) {
		cp_update_scsw(&private->cp, &irb->scsw);
-		if (is_final && private->state == VFIO_CCW_STATE_CP_PENDING)
+		if (is_final && private->state == VFIO_CCW_STATE_CP_PENDING) {
			cp_free(&private->cp);
+			cp_is_finished = true;
+		}
	}
	mutex_lock(&private->io_mutex);
	memcpy(private->io_region->irb_area, irb, sizeof(*irb));
	mutex_unlock(&private->io_mutex);

-	if (private->mdev && is_final)
+	/*
+	 * Reset to IDLE only if processing of a channel program
+	 * has finished. Do not overwrite a possible processing
+	 * state if the final interrupt was for HSCH or CSCH.
+	 */
+	if (private->mdev && cp_is_finished)
		private->state = VFIO_CCW_STATE_IDLE;

	if (private->io_trigger)
@@ -1220 +1220 @@
		   was a result from the ABTS request rather than the CLEANUP
		   request */
		set_bit(BNX2FC_FLAG_IO_CLEANUP, &io_req->req_flags);
+		rc = FAILED;
		goto done;
	}
+4-4
drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
···
{
 	int i;
 
-	free_irq(pci_irq_vector(pdev, 1), hisi_hba);
-	free_irq(pci_irq_vector(pdev, 2), hisi_hba);
-	free_irq(pci_irq_vector(pdev, 11), hisi_hba);
+	devm_free_irq(&pdev->dev, pci_irq_vector(pdev, 1), hisi_hba);
+	devm_free_irq(&pdev->dev, pci_irq_vector(pdev, 2), hisi_hba);
+	devm_free_irq(&pdev->dev, pci_irq_vector(pdev, 11), hisi_hba);
 	for (i = 0; i < hisi_hba->cq_nvecs; i++) {
 		struct hisi_sas_cq *cq = &hisi_hba->cq[i];
 		int nr = hisi_sas_intr_conv ? 16 : 16 + i;
 
-		free_irq(pci_irq_vector(pdev, nr), cq);
+		devm_free_irq(&pdev->dev, pci_irq_vector(pdev, nr), cq);
 	}
 	pci_free_irq_vectors(pdev);
}
+2-2
drivers/scsi/libsas/sas_port.c
···

static void sas_resume_port(struct asd_sas_phy *phy)
{
-	struct domain_device *dev;
+	struct domain_device *dev, *n;
 	struct asd_sas_port *port = phy->port;
 	struct sas_ha_struct *sas_ha = phy->ha;
 	struct sas_internal *si = to_sas_internal(sas_ha->core.shost->transportt);
···
 	 * 1/ presume every device came back
 	 * 2/ force the next revalidation to check all expander phys
 	 */
-	list_for_each_entry(dev, &port->dev_list, dev_list_node) {
+	list_for_each_entry_safe(dev, n, &port->dev_list, dev_list_node) {
 		int i, rc;
 
 		rc = sas_notify_lldd_dev_found(dev);
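The sas_port and emxx_udc hunks both switch to the `_safe` list iterator because the loop body may unlink and free the current entry, after which the plain iterator would dereference freed memory to advance. A minimal userspace sketch of the same idea (a hypothetical singly linked list, not the kernel's `<linux/list.h>`): the safe pattern caches the successor before the current node can be freed, exactly what `list_for_each_entry_safe()`'s extra cursor does.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical minimal list standing in for the kernel's list_head. */
struct node {
	int val;
	struct node *next;
};

/*
 * Safe deletion during traversal: cache cur->next *before* cur may be
 * freed. Advancing via cur->next after free(cur) would be a
 * use-after-free, which is the bug the _safe iterators prevent.
 */
static int remove_odd_values(struct node **head)
{
	struct node *cur = *head, *n;
	struct node **link = head;
	int removed = 0;

	for (; cur; cur = n) {
		n = cur->next;		/* cached successor */
		if (cur->val % 2) {
			*link = n;	/* unlink */
			free(cur);
			removed++;
		} else {
			link = &cur->next;
		}
	}
	return removed;
}

static struct node *push(struct node *head, int val)
{
	struct node *nd = malloc(sizeof(*nd));
	nd->val = val;
	nd->next = head;
	return nd;
}

static int list_sum(const struct node *head)
{
	int sum = 0;
	for (; head; head = head->next)
		sum += head->val;
	return sum;
}
```

The kernel macro does the same caching with a second cursor variable (`n` in the hunks above), which is why the fixes add `*n` alongside the existing iterator.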
···
 	case BTSTAT_SUCCESS:
 	case BTSTAT_LINKED_COMMAND_COMPLETED:
 	case BTSTAT_LINKED_COMMAND_COMPLETED_WITH_FLAG:
-		/* If everything went fine, let's move on.. */
+		/*
+		 * Commands like INQUIRY may transfer less data than
+		 * requested by the initiator via bufflen. Set residual
+		 * count to make upper layer aware of the actual amount
+		 * of data returned.
+		 */
+		scsi_set_resid(cmd, scsi_bufflen(cmd) - e->dataLen);
 		cmd->result = (DID_OK << 16);
 		break;
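The residual the pvscsi hunk reports is simply the difference between what the initiator asked for and what the device actually transferred. A standalone sketch of that arithmetic (names mirror the driver but this is not the driver code):

```c
#include <assert.h>

/*
 * Residual count as in the hunk above: bytes requested (bufflen) minus
 * bytes actually returned by the device (data_len). Clamped so a device
 * reporting more than requested never yields a bogus huge residual.
 */
static unsigned int residual(unsigned int bufflen, unsigned int data_len)
{
	return data_len < bufflen ? bufflen - data_len : 0;
}
```

For example, an INQUIRY issued with a 255-byte buffer that returns 36 bytes of data leaves a residual of 219, which the upper layer uses to know how much of the buffer is valid.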
+10-2
drivers/soundwire/qcom.c
···
 
 	ret = of_property_read_u8_array(np, "qcom,ports-block-pack-mode",
 					bp_mode, nports);
-	if (ret)
-		return ret;
+	if (ret) {
+		u32 version;
+
+		ctrl->reg_read(ctrl, SWRM_COMP_HW_VERSION, &version);
+
+		if (version <= 0x01030000)
+			memset(bp_mode, SWR_INVALID_PARAM, QCOM_SDW_MAX_PORTS);
+		else
+			return ret;
+	}
 
 	memset(hstart, SWR_INVALID_PARAM, QCOM_SDW_MAX_PORTS);
 	of_property_read_u8_array(np, "qcom,ports-hstart", hstart, nports);
+2-2
drivers/staging/emxx_udc/emxx_udc.c
···
 				struct nbu2ss_ep *ep,
 				int status)
{
-	struct nbu2ss_req *req;
+	struct nbu2ss_req *req, *n;
 
 	/* Endpoint Disable */
 	_nbu2ss_epn_exit(udc, ep);
···
 		return 0;
 
 	/* called with irqs blocked */
-	list_for_each_entry(req, &ep->queue, queue) {
+	list_for_each_entry_safe(req, n, &ep->queue, queue) {
 		_nbu2ss_ep_done(ep, req, status);
 	}
 
···
 	struct iblock_dev_plug *ib_dev_plug;
 
 	/*
-	 * Each se_device has a per cpu work this can be run from. Wwe
+	 * Each se_device has a per cpu work this can be run from. We
 	 * shouldn't have multiple threads on the same cpu calling this
 	 * at the same time.
 	 */
-	ib_dev_plug = &ib_dev->ibd_plug[smp_processor_id()];
+	ib_dev_plug = &ib_dev->ibd_plug[raw_smp_processor_id()];
 	if (test_and_set_bit(IBD_PLUGF_PLUGGED, &ib_dev_plug->flags))
 		return NULL;
 
···
 	count = up->tx_loadsz;
 	do {
 		serial_out(up, UART_TX, xmit->buf[xmit->tail]);
+		if (up->bugs & UART_BUG_TXRACE) {
+			/*
+			 * The Aspeed BMC virtual UARTs have a bug where data
+			 * may get stuck in the BMC's Tx FIFO from bursts of
+			 * writes on the APB interface.
+			 *
+			 * Delay back-to-back writes by a read cycle to avoid
+			 * stalling the VUART. Read a register that won't have
+			 * side-effects and discard the result.
+			 */
+			serial_in(up, UART_SCR);
+		}
 		xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1);
 		port->icount.tx++;
 		if (uart_circ_empty(xmit))
+17-35
drivers/tty/serial/rp2.c
···
 	void __iomem *bar0;
 	void __iomem *bar1;
 	spinlock_t card_lock;
-	struct completion fw_loaded;
};

#define RP_ID(prod) PCI_VDEVICE(RP, (prod))
···
 	card->initialized_ports = 0;
}

-static void rp2_fw_cb(const struct firmware *fw, void *context)
+static int rp2_load_firmware(struct rp2_card *card, const struct firmware *fw)
{
-	struct rp2_card *card = context;
 	resource_size_t phys_base;
-	int i, rc = -ENOENT;
-
-	if (!fw) {
-		dev_err(&card->pdev->dev, "cannot find '%s' firmware image\n",
-			RP2_FW_NAME);
-		goto no_fw;
-	}
+	int i, rc = 0;
 
 	phys_base = pci_resource_start(card->pdev, 1);
···
 		card->initialized_ports++;
 	}
 
-	release_firmware(fw);
-no_fw:
-	/*
-	 * rp2_fw_cb() is called from a workqueue long after rp2_probe()
-	 * has already returned success. So if something failed here,
-	 * we'll just leave the now-dormant device in place until somebody
-	 * unbinds it.
-	 */
-	if (rc)
-		dev_warn(&card->pdev->dev, "driver initialization failed\n");
-
-	complete(&card->fw_loaded);
+	return rc;
}

static int rp2_probe(struct pci_dev *pdev,
		     const struct pci_device_id *id)
{
+	const struct firmware *fw;
 	struct rp2_card *card;
 	struct rp2_uart_port *ports;
 	void __iomem * const *bars;
···
 		return -ENOMEM;
 	pci_set_drvdata(pdev, card);
 	spin_lock_init(&card->card_lock);
-	init_completion(&card->fw_loaded);
 
 	rc = pcim_enable_device(pdev);
 	if (rc)
···
 		return -ENOMEM;
 	card->ports = ports;
 
+	rc = request_firmware(&fw, RP2_FW_NAME, &pdev->dev);
+	if (rc < 0) {
+		dev_err(&pdev->dev, "cannot find '%s' firmware image\n",
+			RP2_FW_NAME);
+		return rc;
+	}
+
+	rc = rp2_load_firmware(card, fw);
+
+	release_firmware(fw);
+	if (rc < 0)
+		return rc;
+
 	rc = devm_request_irq(&pdev->dev, pdev->irq, rp2_uart_interrupt,
 			      IRQF_SHARED, DRV_NAME, card);
 	if (rc)
 		return rc;
-
-	/*
-	 * Only catastrophic errors (e.g. ENOMEM) are reported here.
-	 * If the FW image is missing, we'll find out in rp2_fw_cb()
-	 * and print an error message.
-	 */
-	rc = request_firmware_nowait(THIS_MODULE, 1, RP2_FW_NAME, &pdev->dev,
-				     GFP_KERNEL, card, rp2_fw_cb);
-	if (rc)
-		return rc;
-	dev_dbg(&pdev->dev, "waiting for firmware blob...\n");
 
 	return 0;
}
···
{
 	struct rp2_card *card = pci_get_drvdata(pdev);
 
-	wait_for_completion(&card->fw_loaded);
 	rp2_remove_ports(card);
}
+1-1
drivers/tty/serial/serial-tegra.c
···
 
 	do {
 		lsr = tegra_uart_read(tup, UART_LSR);
-		if ((lsr | UART_LSR_TEMT) && !(lsr & UART_LSR_DR))
+		if ((lsr & UART_LSR_TEMT) && !(lsr & UART_LSR_DR))
 			break;
 		udelay(1);
 	} while (--tmout);
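The tegra fix replaces a bitwise OR with an AND: `(lsr | UART_LSR_TEMT)` is nonzero for every possible `lsr` value, so the buggy condition degenerated to just `!(lsr & UART_LSR_DR)` and the loop could exit before the transmitter was actually empty. A standalone illustration (using the standard 16550 `UART_LSR_TEMT`/`UART_LSR_DR` bit values from `serial_reg.h`):

```c
#include <assert.h>

#define UART_LSR_TEMT	0x40	/* transmitter empty */
#define UART_LSR_DR	0x01	/* receiver data ready */

/*
 * Buggy predicate: (lsr | UART_LSR_TEMT) is always nonzero, so only the
 * DR bit is actually tested and TEMT is never waited for.
 */
static int buggy_tx_idle(unsigned int lsr)
{
	return (lsr | UART_LSR_TEMT) && !(lsr & UART_LSR_DR);
}

/* Fixed predicate from the hunk above: test the TEMT bit itself. */
static int fixed_tx_idle(unsigned int lsr)
{
	return (lsr & UART_LSR_TEMT) && !(lsr & UART_LSR_DR);
}
```

With `lsr == 0` (transmitter still busy, no RX data) the buggy version reports idle while the fixed version correctly keeps polling.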
+5-3
drivers/tty/serial/serial_core.c
···
 		goto check_and_exit;
 	}
 
-	retval = security_locked_down(LOCKDOWN_TIOCSSERIAL);
-	if (retval && (change_irq || change_port))
-		goto exit;
+	if (change_irq || change_port) {
+		retval = security_locked_down(LOCKDOWN_TIOCSSERIAL);
+		if (retval)
+			goto exit;
+	}
 
 	/*
 	 * Ask the low level driver to verify the settings.
+2-2
drivers/tty/serial/sh-sci.c
···
{
 	unsigned int bits;
 
+	if (rx_trig >= port->fifosize)
+		rx_trig = port->fifosize - 1;
 	if (rx_trig < 1)
 		rx_trig = 1;
-	if (rx_trig >= port->fifosize)
-		rx_trig = port->fifosize;
 
 	/* HSCIF can be set to an arbitrary level. */
 	if (sci_getreg(port, HSRTRGR)->size) {
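The sh-sci fix changes both the cap (to `fifosize - 1` instead of `fifosize`) and the clamp order: applying the upper bound first and the lower bound second means that even a degenerate FIFO size of 1 still yields a valid trigger of 1, because the later lower-bound check wins. A standalone sketch of that clamp (hypothetical helper, not the driver function):

```c
#include <assert.h>

/*
 * Clamp order from the hunk above: upper bound first (cap at
 * fifosize - 1), lower bound second, so the result is always in
 * [1, fifosize - 1], falling back to 1 when fifosize <= 1.
 */
static int clamp_rx_trig(int rx_trig, int fifosize)
{
	if (rx_trig >= fifosize)
		rx_trig = fifosize - 1;
	if (rx_trig < 1)
		rx_trig = 1;
	return rx_trig;
}
```

With the original order (lower bound first, cap at `fifosize`), a request equal to the FIFO size would be left at `fifosize`, a level the receive interrupt can never reach.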
+3-1
drivers/usb/cdns3/cdns3-gadget.c
···
 	pm_runtime_get_sync(cdns->dev);
 
 	ret = cdns3_gadget_start(cdns);
-	if (ret)
+	if (ret) {
+		pm_runtime_put_sync(cdns->dev);
 		return ret;
+	}
 
 	/*
 	 * Because interrupt line can be shared with other components in
+7-7
drivers/usb/cdns3/cdnsp-gadget.c
···
int cdnsp_ep_dequeue(struct cdnsp_ep *pep, struct cdnsp_request *preq)
{
 	struct cdnsp_device *pdev = pep->pdev;
-	int ret;
+	int ret_stop = 0;
+	int ret_rem;
 
 	trace_cdnsp_request_dequeue(preq);
 
-	if (GET_EP_CTX_STATE(pep->out_ctx) == EP_STATE_RUNNING) {
-		ret = cdnsp_cmd_stop_ep(pdev, pep);
-		if (ret)
-			return ret;
-	}
+	if (GET_EP_CTX_STATE(pep->out_ctx) == EP_STATE_RUNNING)
+		ret_stop = cdnsp_cmd_stop_ep(pdev, pep);
 
-	return cdnsp_remove_request(pdev, preq, pep);
+	ret_rem = cdnsp_remove_request(pdev, preq, pep);
+
+	return ret_rem ? ret_rem : ret_stop;
}

static void cdnsp_zero_in_ctx(struct cdnsp_device *pdev)
···
 	ret = usbfs_increase_memory_usage(len1 + sizeof(struct urb));
 	if (ret)
 		return ret;
-	tbuf = kmalloc(len1, GFP_KERNEL);
+
+	/*
+	 * len1 can be almost arbitrarily large. Don't WARN if it's
+	 * too big, just fail the request.
+	 */
+	tbuf = kmalloc(len1, GFP_KERNEL | __GFP_NOWARN);
 	if (!tbuf) {
 		ret = -ENOMEM;
 		goto done;
···
 	if (num_sgs) {
 		as->urb->sg = kmalloc_array(num_sgs,
 					    sizeof(struct scatterlist),
-					    GFP_KERNEL);
+					    GFP_KERNEL | __GFP_NOWARN);
 		if (!as->urb->sg) {
 			ret = -ENOMEM;
 			goto error;
···
 			(uurb_start - as->usbm->vm_start);
 	} else {
 		as->urb->transfer_buffer = kmalloc(uurb->buffer_length,
-				GFP_KERNEL);
+				GFP_KERNEL | __GFP_NOWARN);
 		if (!as->urb->transfer_buffer) {
 			ret = -ENOMEM;
 			goto error;
+7-6
drivers/usb/dwc3/gadget.c
···
 		req->start_sg = sg_next(s);
 
 		req->num_queued_sgs++;
+		req->num_pending_sgs--;
 
 		/*
 		 * The number of pending SG entries may not correspond to the
···
 		 * don't include unused SG entries.
 		 */
 		if (length == 0) {
-			req->num_pending_sgs -= req->request.num_mapped_sgs - req->num_queued_sgs;
+			req->num_pending_sgs = 0;
 			break;
 		}
 
···
 	struct dwc3_trb *trb = &dep->trb_pool[dep->trb_dequeue];
 	struct scatterlist *sg = req->sg;
 	struct scatterlist *s;
-	unsigned int pending = req->num_pending_sgs;
+	unsigned int num_queued = req->num_queued_sgs;
 	unsigned int i;
 	int ret = 0;
 
-	for_each_sg(sg, s, pending, i) {
+	for_each_sg(sg, s, num_queued, i) {
 		trb = &dep->trb_pool[dep->trb_dequeue];
 
 		req->sg = sg_next(s);
-		req->num_pending_sgs--;
+		req->num_queued_sgs--;
 
 		ret = dwc3_gadget_ep_reclaim_completed_trb(dep, req,
 				trb, event, status, true);
···

static bool dwc3_gadget_ep_request_completed(struct dwc3_request *req)
{
-	return req->num_pending_sgs == 0;
+	return req->num_pending_sgs == 0 && req->num_queued_sgs == 0;
}

static int dwc3_gadget_ep_cleanup_completed_request(struct dwc3_ep *dep,
···
{
 	int ret;
 
-	if (req->num_pending_sgs)
+	if (req->request.num_mapped_sgs)
 		ret = dwc3_gadget_ep_reclaim_trb_sg(dep, req, event,
 				status);
 	else
+3-2
drivers/usb/gadget/udc/renesas_usb3.c
···
 				struct renesas_usb3_request *usb3_req)
{
 	struct renesas_usb3 *usb3 = usb3_ep_to_usb3(usb3_ep);
-	struct renesas_usb3_request *usb3_req_first = usb3_get_request(usb3_ep);
+	struct renesas_usb3_request *usb3_req_first;
 	unsigned long flags;
 	int ret = -EAGAIN;
 	u32 enable_bits = 0;
···
 	spin_lock_irqsave(&usb3->lock, flags);
 	if (usb3_ep->halt || usb3_ep->started)
 		goto out;
-	if (usb3_req != usb3_req_first)
+	usb3_req_first = __usb3_get_request(usb3_ep);
+	if (!usb3_req_first || usb3_req != usb3_req_first)
 		goto out;
 
 	if (usb3_pn_change(usb3, usb3_ep->num) < 0)
+7-7
drivers/usb/host/xhci-ring.c
···
 	list_for_each_entry_safe(td, tmp_td, &ep->cancelled_td_list,
 				 cancelled_td_list) {
 
-		/*
-		 * Doesn't matter what we pass for status, since the core will
-		 * just overwrite it (because the URB has been unlinked).
-		 */
 		ring = xhci_urb_to_transfer_ring(ep->xhci, td->urb);
 
 		if (td->cancel_status == TD_CLEARED)
-			xhci_td_cleanup(ep->xhci, td, ring, 0);
+			xhci_td_cleanup(ep->xhci, td, ring, td->status);
 
 		if (ep->xhci->xhc_state & XHCI_STATE_DYING)
 			return;
···
 			continue;
 		}
 		/*
-		 * If ring stopped on the TD we need to cancel, then we have to
+		 * If a ring stopped on the TD we need to cancel then we have to
 		 * move the xHC endpoint ring dequeue pointer past this TD.
+		 * Rings halted due to STALL may show hw_deq is past the stalled
+		 * TD, but still require a set TR Deq command to flush xHC cache.
 		 */
 		hw_deq = xhci_get_hw_deq(xhci, ep->vdev, ep->ep_index,
 					 td->urb->stream_id);
 		hw_deq &= ~0xf;
 
-		if (trb_in_td(xhci, td->start_seg, td->first_trb,
+		if (td->cancel_status == TD_HALTED) {
+			cached_td = td;
+		} else if (trb_in_td(xhci, td->start_seg, td->first_trb,
 			td->last_trb, hw_deq, false)) {
 			switch (td->cancel_status) {
 			case TD_CLEARED: /* TD is already no-op */
···
 	bool match;
 	int nval;
 	u16 *val;
+	int ret;
 	int i;
 
 	/*
···
 	if (!val)
 		return ERR_PTR(-ENOMEM);
 
-	nval = fwnode_property_read_u16_array(fwnode, "svid", val, nval);
-	if (nval < 0) {
+	ret = fwnode_property_read_u16_array(fwnode, "svid", val, nval);
+	if (ret < 0) {
 		kfree(val);
-		return ERR_PTR(nval);
+		return ERR_PTR(ret);
 	}
 
 	for (i = 0; i < nval; i++) {
+36-3
drivers/usb/typec/tcpm/tcpm.c
···
 	if (PD_VDO_SVDM_VER(p[0]) < svdm_version)
 		typec_partner_set_svdm_version(port->partner,
 					       PD_VDO_SVDM_VER(p[0]));
+
+		tcpm_ams_start(port, DISCOVER_IDENTITY);
 		/* 6.4.4.3.1: Only respond as UFP (device) */
 		if (port->data_role == TYPEC_DEVICE &&
 		    port->nr_snk_vdo) {
···
 		}
 		break;
 	case CMD_DISCOVER_SVID:
+		tcpm_ams_start(port, DISCOVER_SVIDS);
 		break;
 	case CMD_DISCOVER_MODES:
+		tcpm_ams_start(port, DISCOVER_MODES);
 		break;
 	case CMD_ENTER_MODE:
+		tcpm_ams_start(port, DFP_TO_UFP_ENTER_MODE);
 		break;
 	case CMD_EXIT_MODE:
+		tcpm_ams_start(port, DFP_TO_UFP_EXIT_MODE);
 		break;
 	case CMD_ATTENTION:
+		tcpm_ams_start(port, ATTENTION);
 		/* Attention command does not have response */
 		*adev_action = ADEV_ATTENTION;
 		return 0;
···
 	bool frs_enable;
 	int ret;
 
+	if (tcpm_vdm_ams(port) && type != PD_DATA_VENDOR_DEF) {
+		port->vdm_state = VDM_STATE_ERR_BUSY;
+		tcpm_ams_finish(port);
+		mod_vdm_delayed_work(port, 0);
+	}
+
 	switch (type) {
 	case PD_DATA_SOURCE_CAP:
 		for (i = 0; i < cnt; i++)
···
 				    NONE_AMS);
 		break;
 	case PD_DATA_VENDOR_DEF:
-		tcpm_handle_vdm_request(port, msg->payload, cnt);
+		if (tcpm_vdm_ams(port) || port->nr_snk_vdo)
+			tcpm_handle_vdm_request(port, msg->payload, cnt);
+		else if (port->negotiated_rev > PD_REV20)
+			tcpm_pd_handle_msg(port, PD_MSG_CTRL_NOT_SUPP, NONE_AMS);
 		break;
 	case PD_DATA_BIST:
 		port->bist_request = le32_to_cpu(msg->payload[0]);
···
{
 	enum pd_ctrl_msg_type type = pd_header_type_le(msg->header);
 	enum tcpm_state next_state;
+
+	/*
+	 * Stop VDM state machine if interrupted by other Messages while NOT_SUPP is allowed in
+	 * VDM AMS if waiting for VDM responses and will be handled later.
+	 */
+	if (tcpm_vdm_ams(port) && type != PD_CTRL_NOT_SUPP && type != PD_CTRL_GOOD_CRC) {
+		port->vdm_state = VDM_STATE_ERR_BUSY;
+		tcpm_ams_finish(port);
+		mod_vdm_delayed_work(port, 0);
+	}
 
 	switch (type) {
 	case PD_CTRL_GOOD_CRC:
···
 	enum pd_ext_msg_type type = pd_header_type_le(msg->header);
 	unsigned int data_size = pd_ext_header_data_size_le(msg->ext_msg.header);
 
-	if (!(msg->ext_msg.header & PD_EXT_HDR_CHUNKED)) {
+	/* stopping VDM state machine if interrupted by other Messages */
+	if (tcpm_vdm_ams(port)) {
+		port->vdm_state = VDM_STATE_ERR_BUSY;
+		tcpm_ams_finish(port);
+		mod_vdm_delayed_work(port, 0);
+	}
+
+	if (!(le16_to_cpu(msg->ext_msg.header) & PD_EXT_HDR_CHUNKED)) {
 		tcpm_pd_handle_msg(port, PD_MSG_CTRL_NOT_SUPP, NONE_AMS);
 		tcpm_log(port, "Unchunked extended messages unsupported");
 		return;
···
 			 "Data role mismatch, initiating error recovery");
 		tcpm_set_state(port, ERROR_RECOVERY, 0);
 	} else {
-		if (msg->header & PD_HEADER_EXT_HDR)
+		if (le16_to_cpu(msg->header) & PD_HEADER_EXT_HDR)
 			tcpm_pd_ext_msg_request(port, msg);
 		else if (cnt)
 			tcpm_pd_data_request(port, msg);
···
 	int ret;
 
 	ret = hga_card_detect();
-	if (!ret)
+	if (ret)
 		return ret;
 
 	printk(KERN_INFO "hgafb: %s with %ldK of memory detected.\n",
···
} __packed;

/*
- * Dump full key (32 byte encrypt/decrypt keys instead of 16 bytes)
- * is needed if GCM256 (stronger encryption) negotiated
+ * Dump variable-sized keys
 */
struct smb3_full_key_debug_info {
-	__u64 Suid;
+	/* INPUT: size of userspace buffer */
+	__u32 in_size;
+
+	/*
+	 * INPUT: 0 for current user, otherwise session to dump
+	 * OUTPUT: session id that was dumped
+	 */
+	__u64 session_id;
 	__u16 cipher_type;
-	__u8 auth_key[16]; /* SMB2_NTLMV2_SESSKEY_SIZE */
-	__u8 smb3encryptionkey[32]; /* SMB3_ENC_DEC_KEY_SIZE */
-	__u8 smb3decryptionkey[32]; /* SMB3_ENC_DEC_KEY_SIZE */
+	__u8 session_key_length;
+	__u8 server_in_key_length;
+	__u8 server_out_key_length;
+	__u8 data[];
+	/*
+	 * return this struct with the keys appended at the end:
+	 * __u8 session_key[session_key_length];
+	 * __u8 server_in_key[server_in_key_length];
+	 * __u8 server_out_key[server_out_key_length];
+	 */
} __packed;

struct smb3_notify {
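The reworked `smb3_full_key_debug_info` replaces fixed-size key arrays with a flexible array member, so keys of any cipher size are appended after the fixed header and the caller's buffer must be at least `sizeof(header)` plus the three advertised key lengths. A userspace mock of that layout and size check (stdint types standing in for the kernel's `__u32`/`__u64`/`__u16`/`__u8`; the helper name is hypothetical):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/*
 * Mock of the new ioctl layout: packed fixed header followed by a
 * flexible array member holding the variable-sized keys.
 */
struct full_key_info {
	uint32_t in_size;		/* INPUT: total userspace buffer size */
	uint64_t session_id;
	uint16_t cipher_type;
	uint8_t  session_key_length;
	uint8_t  server_in_key_length;
	uint8_t  server_out_key_length;
	uint8_t  data[];		/* keys appended here */
} __attribute__((packed));

/* Buffer-size check corresponding to the ioctl's -ENOBUFS test. */
static int buffer_big_enough(const struct full_key_info *info)
{
	size_t need = sizeof(*info) + info->session_key_length +
		      info->server_in_key_length + info->server_out_key_length;
	return info->in_size >= need;
}
```

Because the struct is packed and the flexible member contributes nothing to `sizeof`, the fixed header is exactly 17 bytes here, so a GCM-128 dump (16-byte session key plus two 16-byte server keys) needs a 65-byte buffer.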
+2-1
fs/cifs/cifspdu.h
···
#define SMB3_SIGN_KEY_SIZE (16)

/*
- * Size of the smb3 encryption/decryption keys
+ * Size of the smb3 encryption/decryption key storage.
+ * This size is big enough to store any cipher key types.
 */
#define SMB3_ENC_DEC_KEY_SIZE (32)

+104-37
fs/cifs/ioctl.c
···
#include "cifsfs.h"
#include "cifs_ioctl.h"
#include "smb2proto.h"
+#include "smb2glob.h"
#include <linux/btrfs.h>

static long cifs_ioctl_query_info(unsigned int xid, struct file *filep,
···
 	return 0;
}

-static int cifs_dump_full_key(struct cifs_tcon *tcon, unsigned long arg)
+static int cifs_dump_full_key(struct cifs_tcon *tcon, struct smb3_full_key_debug_info __user *in)
{
-	struct smb3_full_key_debug_info pfull_key_inf;
-	__u64 suid;
-	struct list_head *tmp;
+	struct smb3_full_key_debug_info out;
 	struct cifs_ses *ses;
+	int rc = 0;
 	bool found = false;
+	u8 __user *end;
 
-	if (!smb3_encryption_required(tcon))
-		return -EOPNOTSUPP;
+	if (!smb3_encryption_required(tcon)) {
+		rc = -EOPNOTSUPP;
+		goto out;
+	}
 
-	ses = tcon->ses; /* default to user id for current user */
-	if (get_user(suid, (__u64 __user *)arg))
-		suid = 0;
-	if (suid) {
-		/* search to see if there is a session with a matching SMB UID */
+	/* copy user input into our output buffer */
+	if (copy_from_user(&out, in, sizeof(out))) {
+		rc = -EINVAL;
+		goto out;
+	}
+
+	if (!out.session_id) {
+		/* if ses id is 0, use current user session */
+		ses = tcon->ses;
+	} else {
+		/* otherwise if a session id is given, look for it in all our sessions */
+		struct cifs_ses *ses_it = NULL;
+		struct TCP_Server_Info *server_it = NULL;
+
 		spin_lock(&cifs_tcp_ses_lock);
-		list_for_each(tmp, &tcon->ses->server->smb_ses_list) {
-			ses = list_entry(tmp, struct cifs_ses, smb_ses_list);
-			if (ses->Suid == suid) {
-				found = true;
-				break;
+		list_for_each_entry(server_it, &cifs_tcp_ses_list, tcp_ses_list) {
+			list_for_each_entry(ses_it, &server_it->smb_ses_list, smb_ses_list) {
+				if (ses_it->Suid == out.session_id) {
+					ses = ses_it;
+					/*
+					 * since we are using the session outside the crit
+					 * section, we need to make sure it won't be released
+					 * so increment its refcount
+					 */
+					ses->ses_count++;
+					found = true;
+					goto search_end;
+				}
 			}
 		}
+search_end:
 		spin_unlock(&cifs_tcp_ses_lock);
-		if (found == false)
-			return -EINVAL;
-	} /* else uses default user's SMB UID (ie current user) */
+		if (!found) {
+			rc = -ENOENT;
+			goto out;
+		}
+	}
 
-	pfull_key_inf.cipher_type = le16_to_cpu(ses->server->cipher_type);
-	pfull_key_inf.Suid = ses->Suid;
-	memcpy(pfull_key_inf.auth_key, ses->auth_key.response,
-	       16 /* SMB2_NTLMV2_SESSKEY_SIZE */);
-	memcpy(pfull_key_inf.smb3decryptionkey, ses->smb3decryptionkey,
-	       32 /* SMB3_ENC_DEC_KEY_SIZE */);
-	memcpy(pfull_key_inf.smb3encryptionkey,
-	       ses->smb3encryptionkey, 32 /* SMB3_ENC_DEC_KEY_SIZE */);
-	if (copy_to_user((void __user *)arg, &pfull_key_inf,
-			 sizeof(struct smb3_full_key_debug_info)))
-		return -EFAULT;
+	switch (ses->server->cipher_type) {
+	case SMB2_ENCRYPTION_AES128_CCM:
+	case SMB2_ENCRYPTION_AES128_GCM:
+		out.session_key_length = CIFS_SESS_KEY_SIZE;
+		out.server_in_key_length = out.server_out_key_length = SMB3_GCM128_CRYPTKEY_SIZE;
+		break;
+	case SMB2_ENCRYPTION_AES256_CCM:
+	case SMB2_ENCRYPTION_AES256_GCM:
+		out.session_key_length = CIFS_SESS_KEY_SIZE;
+		out.server_in_key_length = out.server_out_key_length = SMB3_GCM256_CRYPTKEY_SIZE;
+		break;
+	default:
+		rc = -EOPNOTSUPP;
+		goto out;
+	}
 
-	return 0;
+	/* check if user buffer is big enough to store all the keys */
+	if (out.in_size < sizeof(out) + out.session_key_length + out.server_in_key_length
+	    + out.server_out_key_length) {
+		rc = -ENOBUFS;
+		goto out;
+	}
+
+	out.session_id = ses->Suid;
+	out.cipher_type = le16_to_cpu(ses->server->cipher_type);
+
+	/* overwrite user input with our output */
+	if (copy_to_user(in, &out, sizeof(out))) {
+		rc = -EINVAL;
+		goto out;
+	}
+
+	/* append all the keys at the end of the user buffer */
+	end = in->data;
+	if (copy_to_user(end, ses->auth_key.response, out.session_key_length)) {
+		rc = -EINVAL;
+		goto out;
+	}
+	end += out.session_key_length;
+
+	if (copy_to_user(end, ses->smb3encryptionkey, out.server_in_key_length)) {
+		rc = -EINVAL;
+		goto out;
+	}
+	end += out.server_in_key_length;
+
+	if (copy_to_user(end, ses->smb3decryptionkey, out.server_out_key_length)) {
+		rc = -EINVAL;
+		goto out;
+	}
+
+out:
+	if (found)
+		cifs_put_smb_ses(ses);
+	return rc;
}

long cifs_ioctl(struct file *filep, unsigned int command, unsigned long arg)
···
 		rc = -EOPNOTSUPP;
 		break;
 	case CIFS_DUMP_KEY:
+		/*
+		 * Dump encryption keys. This is an old ioctl that only
+		 * handles AES-128-{CCM,GCM}.
+		 */
 		if (pSMBFile == NULL)
 			break;
 		if (!capable(CAP_SYS_ADMIN)) {
···
 		else
 			rc = 0;
 		break;
-	/*
-	 * Dump full key (32 bytes instead of 16 bytes) is
-	 * needed if GCM256 (stronger encryption) negotiated
-	 */
 	case CIFS_DUMP_FULL_KEY:
+		/*
+		 * Dump encryption keys (handles any key sizes)
+		 */
 		if (pSMBFile == NULL)
 			break;
 		if (!capable(CAP_SYS_ADMIN)) {
···
 			break;
 		}
 		tcon = tlink_tcon(pSMBFile->tlink);
-		rc = cifs_dump_full_key(tcon, arg);
-
+		rc = cifs_dump_full_key(tcon, (void __user *)arg);
 		break;
 	case CIFS_IOC_NOTIFY:
 		if (!S_ISDIR(inode->i_mode)) {
+7
fs/cifs/smb2pdu.c
···
 	/* Internal types */
 	server->capabilities |= SMB2_NT_FIND | SMB2_LARGE_FILES;
 
+	/*
+	 * SMB3.0 supports only 1 cipher and doesn't have a encryption neg context
+	 * Set the cipher type manually.
+	 */
+	if (server->dialect == SMB30_PROT_ID && (server->capabilities & SMB2_GLOBAL_CAP_ENCRYPTION))
+		server->cipher_type = SMB2_ENCRYPTION_AES128_CCM;
+
 	security_blob = smb2_get_data_area_len(&blob_offset, &blob_length,
 					       (struct smb2_sync_hdr *)rsp);
 	/*
···
static int debugfs_setattr(struct user_namespace *mnt_userns,
			   struct dentry *dentry, struct iattr *ia)
{
-	int ret = security_locked_down(LOCKDOWN_DEBUGFS);
+	int ret;
 
-	if (ret && (ia->ia_valid & (ATTR_MODE | ATTR_UID | ATTR_GID)))
-		return ret;
+	if (ia->ia_valid & (ATTR_MODE | ATTR_UID | ATTR_GID)) {
+		ret = security_locked_down(LOCKDOWN_DEBUGFS);
+		if (ret)
+			return ret;
+	}
 	return simple_setattr(&init_user_ns, dentry, ia);
}

+7-2
fs/gfs2/file.c
···
 	struct gfs2_inode *ip = GFS2_I(inode);
 	struct gfs2_holder gh;
 	vm_fault_t ret;
+	u16 state;
 	int err;
 
-	gfs2_holder_init(ip->i_gl, LM_ST_SHARED, 0, &gh);
+	state = (vmf->flags & FAULT_FLAG_WRITE) ? LM_ST_EXCLUSIVE : LM_ST_SHARED;
+	gfs2_holder_init(ip->i_gl, state, 0, &gh);
 	err = gfs2_glock_nq(&gh);
 	if (err) {
 		ret = block_page_mkwrite_return(err);
···
 	current->backing_dev_info = inode_to_bdi(inode);
 	buffered = iomap_file_buffered_write(iocb, from, &gfs2_iomap_ops);
 	current->backing_dev_info = NULL;
-	if (unlikely(buffered <= 0))
+	if (unlikely(buffered <= 0)) {
+		if (!ret)
+			ret = buffered;
 		goto out_unlock;
+	}
 
 	/*
 	 * We need to ensure that the page cache pages are written to
+24-4
fs/gfs2/glock.c
···
 	spin_unlock(&gl->gl_lockref.lock);
}

+static bool is_system_glock(struct gfs2_glock *gl)
+{
+	struct gfs2_sbd *sdp = gl->gl_name.ln_sbd;
+	struct gfs2_inode *m_ip = GFS2_I(sdp->sd_statfs_inode);
+
+	if (gl == m_ip->i_gl)
+		return true;
+	return false;
+}
+
/**
 * do_xmote - Calls the DLM to change the state of a lock
 * @gl: The lock state
···
 	 * to see sd_log_error and withdraw, and in the meantime, requeue the
 	 * work for later.
 	 *
+	 * We make a special exception for some system glocks, such as the
+	 * system statfs inode glock, which needs to be granted before the
+	 * gfs2_quotad daemon can exit, and that exit needs to finish before
+	 * we can unmount the withdrawn file system.
+	 *
 	 * However, if we're just unlocking the lock (say, for unmount, when
 	 * gfs2_gl_hash_clear calls clear_glock) and recovery is complete
 	 * then it's okay to tell dlm to unlock it.
 	 */
 	if (unlikely(sdp->sd_log_error && !gfs2_withdrawn(sdp)))
 		gfs2_withdraw_delayed(sdp);
-	if (glock_blocked_by_withdraw(gl)) {
-		if (target != LM_ST_UNLOCKED ||
-		    test_bit(SDF_WITHDRAW_RECOVERY, &sdp->sd_flags)) {
+	if (glock_blocked_by_withdraw(gl) &&
+	    (target != LM_ST_UNLOCKED ||
+	     test_bit(SDF_WITHDRAW_RECOVERY, &sdp->sd_flags))) {
+		if (!is_system_glock(gl)) {
 			gfs2_glock_queue_work(gl, GL_GLOCK_DFT_HOLD);
 			goto out;
+		} else {
+			clear_bit(GLF_INVALIDATE_IN_PROGRESS, &gl->gl_flags);
 		}
 	}
 
···
 	    glock_blocked_by_withdraw(gl) &&
 	    gh->gh_gl != sdp->sd_jinode_gl) {
 		sdp->sd_glock_dqs_held++;
+		spin_unlock(&gl->gl_lockref.lock);
 		might_sleep();
 		wait_on_bit(&sdp->sd_flags, SDF_WITHDRAW_RECOVERY,
 			    TASK_UNINTERRUPTIBLE);
+		spin_lock(&gl->gl_lockref.lock);
 	}
 	if (gh->gh_flags & GL_NOCACHE)
 		handle_callback(gl, LM_ST_UNLOCKED, 0, false);
···
 	while(!list_empty(list)) {
 		gl = list_first_entry(list, struct gfs2_glock, gl_lru);
 		list_del_init(&gl->gl_lru);
+		clear_bit(GLF_LRU, &gl->gl_flags);
 		if (!spin_trylock(&gl->gl_lockref.lock)) {
add_back_to_lru:
 			list_add(&gl->gl_lru, &lru_list);
···
 		if (!test_bit(GLF_LOCK, &gl->gl_flags)) {
 			list_move(&gl->gl_lru, &dispose);
 			atomic_dec(&lru_count);
-			clear_bit(GLF_LRU, &gl->gl_flags);
 			freed++;
 			continue;
 		}
···
 	struct io_tctx_node *node;
 	unsigned long index;
 
-	tctx->io_wq = NULL;
 	xa_for_each(&tctx->xa, index, node)
 		io_uring_del_task_file(index);
-	if (wq)
+	if (wq) {
+		/*
+		 * Must be after io_uring_del_task_file() (removes nodes under
+		 * uring_lock) to avoid race with io_uring_try_cancel_iowq().
+		 */
+		tctx->io_wq = NULL;
 		io_wq_put_and_exit(wq);
+	}
}

static s64 tctx_inflight(struct io_uring_task *tctx, bool tracked)
···
 
 	if (!current->io_uring)
 		return;
+	if (tctx->io_wq)
+		io_wq_exit_start(tctx->io_wq);
+
 	WARN_ON_ONCE(!sqd || sqd->thread != current);
 
 	atomic_inc(&tctx->in_idle);
···
 	struct io_uring_task *tctx = current->io_uring;
 	DEFINE_WAIT(wait);
 	s64 inflight;
+
+	if (tctx->io_wq)
+		io_wq_exit_start(tctx->io_wq);
 
 	/* make sure overflow events are dropped */
 	atomic_inc(&tctx->in_idle);
+1-1
fs/netfs/Kconfig
···
# SPDX-License-Identifier: GPL-2.0-only

config NETFS_SUPPORT
-	tristate "Support for network filesystem high-level I/O"
+	tristate
	help
	  This option enables support for network filesystems, including
	  helpers for high-level buffered I/O, abstracting out read
···
 		if (unlikely(!p))
 			goto out_err;
 		fl->fh_array[i]->size = be32_to_cpup(p++);
-		if (sizeof(struct nfs_fh) < fl->fh_array[i]->size) {
+		if (fl->fh_array[i]->size > NFS_MAXFHSIZE) {
 			printk(KERN_ERR "NFS: Too big fh %d received %d\n",
 			       i, fl->fh_array[i]->size);
 			goto out_err;
···
 	case SEEK_HOLE:
 	case SEEK_DATA:
 		ret = nfs42_proc_llseek(filep, offset, whence);
-		if (ret != -ENOTSUPP)
+		if (ret != -EOPNOTSUPP)
 			return ret;
 		fallthrough;
 	default:
+2-2
fs/nfs/nfs4proc.c
···
 	rcu_read_unlock();
 	trace_nfs4_open_stateid_update_wait(state->inode, stateid, 0);
 
-	if (!signal_pending(current)) {
+	if (!fatal_signal_pending(current)) {
 		if (schedule_timeout(5*HZ) == 0)
 			status = -EAGAIN;
 		else
···
 	write_sequnlock(&state->seqlock);
 	trace_nfs4_close_stateid_update_wait(state->inode, dst, 0);
 
-	if (signal_pending(current))
+	if (fatal_signal_pending(current))
 		status = -EINTR;
 	else
 		if (schedule_timeout(5*HZ) != 0)
+6-14
fs/nfs/pagelist.c
··· 1094,1094
      struct nfs_page *prev = NULL;
      unsigned int size;

-     if (mirror->pg_count != 0) {
-         prev = nfs_list_entry(mirror->pg_list.prev);
-     } else {
+     if (list_empty(&mirror->pg_list)) {
          if (desc->pg_ops->pg_init)
              desc->pg_ops->pg_init(desc, req);
          if (desc->pg_error < 0)
              return 0;
          mirror->pg_base = req->wb_pgbase;
-     }
+         mirror->pg_count = 0;
+         mirror->pg_recoalesce = 0;
+     } else
+         prev = nfs_list_entry(mirror->pg_list.prev);

      if (desc->pg_maxretrans && req->wb_nio > desc->pg_maxretrans) {
          if (NFS_SERVER(desc->pg_inode)->flags & NFS_MOUNT_SOFTERR)

··· 1128,1127
  {
      struct nfs_pgio_mirror *mirror = nfs_pgio_current_mirror(desc);

-
      if (!list_empty(&mirror->pg_list)) {
          int error = desc->pg_ops->pg_doio(desc);
          if (error < 0)
              desc->pg_error = error;
-         else
+         if (list_empty(&mirror->pg_list))
              mirror->pg_bytes_written += mirror->pg_count;
-     }
-     if (list_empty(&mirror->pg_list)) {
-         mirror->pg_count = 0;
-         mirror->pg_base = 0;
      }
  }

··· 1223,1227

      do {
          list_splice_init(&mirror->pg_list, &head);
-         mirror->pg_bytes_written -= mirror->pg_count;
-         mirror->pg_count = 0;
-         mirror->pg_base = 0;
-         mirror->pg_recoalesce = 0;

          while (!list_empty(&head)) {
              struct nfs_page *req;
··· 424,424
   * events generated by the listener process itself, without disclosing
   * the pids of other processes.
   */
- if (!capable(CAP_SYS_ADMIN) &&
+ if (FAN_GROUP_FLAG(group, FANOTIFY_UNPRIV) &&
      task_tgid(current) != event->pid)
      metadata.pid = 0;

- if (path && path->mnt && path->dentry) {
+ /*
+  * For now, fid mode is required for an unprivileged listener and
+  * fid mode does not report fd in events. Keep this check anyway
+  * for safety in case fid mode requirement is relaxed in the future
+  * to allow unprivileged listener to get events with no fd and no fid.
+  */
+ if (!FAN_GROUP_FLAG(group, FANOTIFY_UNPRIV) &&
+     path && path->mnt && path->dentry) {
      fd = create_fd(group, path, &f);
      if (fd < 0)
          return fd;

··· 1047,1040
      int f_flags, fd;
      unsigned int fid_mode = flags & FANOTIFY_FID_BITS;
      unsigned int class = flags & FANOTIFY_CLASS_BITS;
+     unsigned int internal_flags = 0;

      pr_debug("%s: flags=%x event_f_flags=%x\n",
               __func__, flags, event_f_flags);

··· 1061,1053
       */
      if ((flags & FANOTIFY_ADMIN_INIT_FLAGS) || !fid_mode)
          return -EPERM;
+
+     /*
+      * Setting the internal flag FANOTIFY_UNPRIV on the group
+      * prevents setting mount/filesystem marks on this group and
+      * prevents reporting pid and open fd in events.
+      */
+     internal_flags |= FANOTIFY_UNPRIV;
  }

  #ifdef CONFIG_AUDITSYSCALL

··· 1120,1105
          goto out_destroy_group;
      }

-     group->fanotify_data.flags = flags;
+     group->fanotify_data.flags = flags | internal_flags;
      group->memcg = get_mem_cgroup_from_mm(current->mm);

      group->fanotify_data.merge_hash = fanotify_alloc_merge_hash();

··· 1320,1305
      group = f.file->private_data;

      /*
-      * An unprivileged user is not allowed to watch a mount point nor
-      * a filesystem.
+      * An unprivileged user is not allowed to setup mount nor filesystem
+      * marks. This also includes setting up such marks by a group that
+      * was initialized by an unprivileged user.
       */
      ret = -EPERM;
-     if (!capable(CAP_SYS_ADMIN) &&
+     if ((!capable(CAP_SYS_ADMIN) ||
+          FAN_GROUP_FLAG(group, FANOTIFY_UNPRIV)) &&
          mark_type != FAN_MARK_INODE)
          goto fput_and_out;

··· 1477,1460
      max_marks = clamp(max_marks, FANOTIFY_OLD_DEFAULT_MAX_MARKS,
                        FANOTIFY_DEFAULT_MAX_USER_MARKS);

+     BUILD_BUG_ON(FANOTIFY_INIT_FLAGS & FANOTIFY_INTERNAL_GROUP_FLAGS);
      BUILD_BUG_ON(HWEIGHT32(FANOTIFY_INIT_FLAGS) != 10);
      BUILD_BUG_ON(HWEIGHT32(FANOTIFY_MARK_FLAGS) != 9);
··· 2703,2703
      void *page;
      int rv;

+     /* A task may only write when it was the opener. */
+     if (file->f_cred != current_real_cred())
+         return -EPERM;
+
      rcu_read_lock();
      task = pid_task(proc_pid(inode), PIDTYPE_PID);
      if (!task) {
+15-3
fs/xfs/libxfs/xfs_ag_resv.c
··· 325,325
      error2 = xfs_alloc_pagf_init(mp, tp, pag->pag_agno, 0);
      if (error2)
          return error2;
-     ASSERT(xfs_perag_resv(pag, XFS_AG_RESV_METADATA)->ar_reserved +
-            xfs_perag_resv(pag, XFS_AG_RESV_RMAPBT)->ar_reserved <=
-            pag->pagf_freeblks + pag->pagf_flcount);
+
+     /*
+      * If there isn't enough space in the AG to satisfy the
+      * reservation, let the caller know that there wasn't enough
+      * space. Callers are responsible for deciding what to do
+      * next, since (in theory) we can stumble along with
+      * insufficient reservation if data blocks are being freed to
+      * replenish the AG's free space.
+      */
+     if (!error &&
+         xfs_perag_resv(pag, XFS_AG_RESV_METADATA)->ar_reserved +
+         xfs_perag_resv(pag, XFS_AG_RESV_RMAPBT)->ar_reserved >
+         pag->pagf_freeblks + pag->pagf_flcount)
+         error = -ENOSPC;
  }
+
  return error;
  }
-12
fs/xfs/libxfs/xfs_bmap.c
··· 605,605

      ASSERT(cur);
      ASSERT(whichfork != XFS_COW_FORK);
-     ASSERT(!xfs_need_iread_extents(ifp));
      ASSERT(ifp->if_format == XFS_DINODE_FMT_BTREE);
      ASSERT(be16_to_cpu(rblock->bb_level) == 1);
      ASSERT(be16_to_cpu(rblock->bb_numrecs) == 1);

··· 5349,5350
      xfs_fsblock_t sum;
      xfs_filblks_t len = *rlen; /* length to unmap in file */
      xfs_fileoff_t max_len;
-     xfs_agnumber_t prev_agno = NULLAGNUMBER, agno;
      xfs_fileoff_t end;
      struct xfs_iext_cursor icur;
      bool done = false;

··· 5440,5442
      del = got;
      wasdel = isnullstartblock(del.br_startblock);

-     /*
-      * Make sure we don't touch multiple AGF headers out of order
-      * in a single transaction, as that could cause AB-BA deadlocks.
-      */
-     if (!wasdel && !isrt) {
-         agno = XFS_FSB_TO_AGNO(mp, del.br_startblock);
-         if (prev_agno != NULLAGNUMBER && prev_agno > agno)
-             break;
-         prev_agno = agno;
-     }
      if (got.br_startoff < start) {
          del.br_startoff = start;
          del.br_blockcount -= start - got.br_startoff;
+42-4
fs/xfs/libxfs/xfs_inode_buf.c
··· 559,559
  /*
   * Validate di_extsize hint.
   *
-  * The rules are documented at xfs_ioctl_setattr_check_extsize().
-  * These functions must be kept in sync with each other.
+  * 1. Extent size hint is only valid for directories and regular files.
+  * 2. FS_XFLAG_EXTSIZE is only valid for regular files.
+  * 3. FS_XFLAG_EXTSZINHERIT is only valid for directories.
+  * 4. Hint cannot be larger than MAXTEXTLEN.
+  * 5. Can be changed on directories at any time.
+  * 6. Hint value of 0 turns off hints, clears inode flags.
+  * 7. Extent size must be a multiple of the appropriate block size.
+  *    For realtime files, this is the rt extent size.
+  * 8. For non-realtime files, the extent size hint must be limited
+  *    to half the AG size to avoid alignment extending the extent beyond the
+  *    limits of the AG.
   */
  xfs_failaddr_t
  xfs_inode_validate_extsize(

··· 588,579
      hint_flag = (flags & XFS_DIFLAG_EXTSIZE);
      inherit_flag = (flags & XFS_DIFLAG_EXTSZINHERIT);
      extsize_bytes = XFS_FSB_TO_B(mp, extsize);
+
+     /*
+      * This comment describes a historic gap in this verifier function.
+      *
+      * On older kernels, the extent size hint verifier doesn't check that
+      * the extent size hint is an integer multiple of the realtime extent
+      * size on a directory with both RTINHERIT and EXTSZINHERIT flags set.
+      * The verifier has always enforced the alignment rule for regular
+      * files with the REALTIME flag set.
+      *
+      * If a directory with a misaligned extent size hint is allowed to
+      * propagate that hint into a new regular realtime file, the result
+      * is that the inode cluster buffer verifier will trigger a corruption
+      * shutdown the next time it is run.
+      *
+      * Unfortunately, there could be filesystems with these misconfigured
+      * directories in the wild, so we cannot add a check to this verifier
+      * at this time because that will result in a new source of directory
+      * corruption errors when reading an existing filesystem. Instead, we
+      * permit the misconfiguration to pass through the verifiers so that
+      * callers of this function can correct and mitigate externally.
+      */

      if (rt_flag)
          blocksize_bytes = mp->m_sb.sb_rextsize << mp->m_sb.sb_blocklog;

··· 647,616
  /*
   * Validate di_cowextsize hint.
   *
-  * The rules are documented at xfs_ioctl_setattr_check_cowextsize().
-  * These functions must be kept in sync with each other.
+  * 1. CoW extent size hint can only be set if reflink is enabled on the fs.
+  *    The inode does not have to have any shared blocks, but it must be a v3.
+  * 2. FS_XFLAG_COWEXTSIZE is only valid for directories and regular files;
+  *    for a directory, the hint is propagated to new files.
+  * 3. Can be changed on files & directories at any time.
+  * 4. Hint value of 0 turns off hints, clears inode flags.
+  * 5. Extent size must be a multiple of the appropriate block size.
+  * 6. The extent size hint must be limited to half the AG size to avoid
+  *    alignment extending the extent beyond the limits of the AG.
   */
  xfs_failaddr_t
  xfs_inode_validate_cowextsize(
+17
fs/xfs/libxfs/xfs_trans_inode.c
··· 143,143
      }

      /*
+      * Inode verifiers on older kernels don't check that the extent size
+      * hint is an integer multiple of the rt extent size on a directory
+      * with both rtinherit and extszinherit flags set. If we're logging a
+      * directory that is misconfigured in this way, clear the hint.
+      */
+     if ((ip->i_diflags & XFS_DIFLAG_RTINHERIT) &&
+         (ip->i_diflags & XFS_DIFLAG_EXTSZINHERIT) &&
+         (ip->i_extsize % ip->i_mount->m_sb.sb_rextsize) > 0) {
+         xfs_info_once(ip->i_mount,
+     "Correcting misaligned extent size hint in inode 0x%llx.", ip->i_ino);
+         ip->i_diflags &= ~(XFS_DIFLAG_EXTSIZE |
+                            XFS_DIFLAG_EXTSZINHERIT);
+         ip->i_extsize = 0;
+         flags |= XFS_ILOG_CORE;
+     }
+
+     /*
       * Record the specific change for fdatasync optimisation. This allows
       * fdatasync to skip log forces for inodes that are only timestamp
       * dirty.
+29
fs/xfs/xfs_inode.c
··· 690,690
      const struct xfs_inode *pip)
  {
      unsigned int di_flags = 0;
+     xfs_failaddr_t failaddr;
      umode_t mode = VFS_I(ip)->i_mode;

      if (S_ISDIR(mode)) {

··· 730,729
          di_flags |= XFS_DIFLAG_FILESTREAM;

      ip->i_diflags |= di_flags;
+
+     /*
+      * Inode verifiers on older kernels only check that the extent size
+      * hint is an integer multiple of the rt extent size on realtime files.
+      * They did not check the hint alignment on a directory with both
+      * rtinherit and extszinherit flags set. If the misaligned hint is
+      * propagated from a directory into a new realtime file, new file
+      * allocations will fail due to math errors in the rt allocator and/or
+      * trip the verifiers. Validate the hint settings in the new file so
+      * that we don't let broken hints propagate.
+      */
+     failaddr = xfs_inode_validate_extsize(ip->i_mount, ip->i_extsize,
+                                           VFS_I(ip)->i_mode, ip->i_diflags);
+     if (failaddr) {
+         ip->i_diflags &= ~(XFS_DIFLAG_EXTSIZE |
+                            XFS_DIFLAG_EXTSZINHERIT);
+         ip->i_extsize = 0;
+     }
  }

  /* Propagate di_flags2 from a parent inode to a child inode. */

··· 756,737
      struct xfs_inode *ip,
      const struct xfs_inode *pip)
  {
+     xfs_failaddr_t failaddr;
+
      if (pip->i_diflags2 & XFS_DIFLAG2_COWEXTSIZE) {
          ip->i_diflags2 |= XFS_DIFLAG2_COWEXTSIZE;
          ip->i_cowextsize = pip->i_cowextsize;
      }
      if (pip->i_diflags2 & XFS_DIFLAG2_DAX)
          ip->i_diflags2 |= XFS_DIFLAG2_DAX;
+
+     /* Don't let invalid cowextsize hints propagate. */
+     failaddr = xfs_inode_validate_cowextsize(ip->i_mount, ip->i_cowextsize,
+                                              VFS_I(ip)->i_mode, ip->i_diflags,
+                                              ip->i_diflags2);
+     if (failaddr) {
+         ip->i_diflags2 &= ~XFS_DIFLAG2_COWEXTSIZE;
+         ip->i_cowextsize = 0;
+     }
  }

  /*
+34-67
fs/xfs/xfs_ioctl.c
··· 1267,1267
  }

  /*
-  * extent size hint validation is somewhat cumbersome. Rules are:
-  *
-  * 1. extent size hint is only valid for directories and regular files
-  * 2. FS_XFLAG_EXTSIZE is only valid for regular files
-  * 3. FS_XFLAG_EXTSZINHERIT is only valid for directories.
-  * 4. can only be changed on regular files if no extents are allocated
-  * 5. can be changed on directories at any time
-  * 6. extsize hint of 0 turns off hints, clears inode flags.
-  * 7. Extent size must be a multiple of the appropriate block size.
-  * 8. for non-realtime files, the extent size hint must be limited
-  *    to half the AG size to avoid alignment extending the extent beyond the
-  *    limits of the AG.
-  *
-  * Please keep this function in sync with xfs_scrub_inode_extsize.
+  * Validate a proposed extent size hint. For regular files, the hint can only
+  * be changed if no extents are allocated.
   */
  static int
  xfs_ioctl_setattr_check_extsize(

··· 1276,1288
      struct fileattr *fa)
  {
      struct xfs_mount *mp = ip->i_mount;
-     xfs_extlen_t size;
-     xfs_fsblock_t extsize_fsb;
+     xfs_failaddr_t failaddr;
+     uint16_t new_diflags;

      if (!fa->fsx_valid)
          return 0;

      if (S_ISREG(VFS_I(ip)->i_mode) && ip->i_df.if_nextents &&
-         ((ip->i_extsize << mp->m_sb.sb_blocklog) != fa->fsx_extsize))
+         XFS_FSB_TO_B(mp, ip->i_extsize) != fa->fsx_extsize)
          return -EINVAL;

-     if (fa->fsx_extsize == 0)
-         return 0;
-
-     extsize_fsb = XFS_B_TO_FSB(mp, fa->fsx_extsize);
-     if (extsize_fsb > MAXEXTLEN)
+     if (fa->fsx_extsize & mp->m_blockmask)
          return -EINVAL;

-     if (XFS_IS_REALTIME_INODE(ip) ||
-         (fa->fsx_xflags & FS_XFLAG_REALTIME)) {
-         size = mp->m_sb.sb_rextsize << mp->m_sb.sb_blocklog;
-     } else {
-         size = mp->m_sb.sb_blocksize;
-         if (extsize_fsb > mp->m_sb.sb_agblocks / 2)
+     new_diflags = xfs_flags2diflags(ip, fa->fsx_xflags);
+
+     /*
+      * Inode verifiers on older kernels don't check that the extent size
+      * hint is an integer multiple of the rt extent size on a directory
+      * with both rtinherit and extszinherit flags set. Don't let sysadmins
+      * misconfigure directories.
+      */
+     if ((new_diflags & XFS_DIFLAG_RTINHERIT) &&
+         (new_diflags & XFS_DIFLAG_EXTSZINHERIT)) {
+         unsigned int rtextsize_bytes;
+
+         rtextsize_bytes = XFS_FSB_TO_B(mp, mp->m_sb.sb_rextsize);
+         if (fa->fsx_extsize % rtextsize_bytes)
              return -EINVAL;
      }

-     if (fa->fsx_extsize % size)
-         return -EINVAL;
-
-     return 0;
+     failaddr = xfs_inode_validate_extsize(ip->i_mount,
+                                           XFS_B_TO_FSB(mp, fa->fsx_extsize),
+                                           VFS_I(ip)->i_mode, new_diflags);
+     return failaddr != NULL ? -EINVAL : 0;
  }

- /*
-  * CoW extent size hint validation rules are:
-  *
-  * 1. CoW extent size hint can only be set if reflink is enabled on the fs.
-  *    The inode does not have to have any shared blocks, but it must be a v3.
-  * 2. FS_XFLAG_COWEXTSIZE is only valid for directories and regular files;
-  *    for a directory, the hint is propagated to new files.
-  * 3. Can be changed on files & directories at any time.
-  * 4. CoW extsize hint of 0 turns off hints, clears inode flags.
-  * 5. Extent size must be a multiple of the appropriate block size.
-  * 6. The extent size hint must be limited to half the AG size to avoid
-  *    alignment extending the extent beyond the limits of the AG.
-  *
-  * Please keep this function in sync with xfs_scrub_inode_cowextsize.
-  */
  static int
  xfs_ioctl_setattr_check_cowextsize(
      struct xfs_inode *ip,
      struct fileattr *fa)
  {
      struct xfs_mount *mp = ip->i_mount;
-     xfs_extlen_t size;
-     xfs_fsblock_t cowextsize_fsb;
+     xfs_failaddr_t failaddr;
+     uint64_t new_diflags2;
+     uint16_t new_diflags;

      if (!fa->fsx_valid)
          return 0;

-     if (!(fa->fsx_xflags & FS_XFLAG_COWEXTSIZE))
-         return 0;
-
-     if (!xfs_sb_version_hasreflink(&ip->i_mount->m_sb))
+     if (fa->fsx_cowextsize & mp->m_blockmask)
          return -EINVAL;

-     if (fa->fsx_cowextsize == 0)
-         return 0;
+     new_diflags = xfs_flags2diflags(ip, fa->fsx_xflags);
+     new_diflags2 = xfs_flags2diflags2(ip, fa->fsx_xflags);

-     cowextsize_fsb = XFS_B_TO_FSB(mp, fa->fsx_cowextsize);
-     if (cowextsize_fsb > MAXEXTLEN)
-         return -EINVAL;
-
-     size = mp->m_sb.sb_blocksize;
-     if (cowextsize_fsb > mp->m_sb.sb_agblocks / 2)
-         return -EINVAL;
-
-     if (fa->fsx_cowextsize % size)
-         return -EINVAL;
-
-     return 0;
+     failaddr = xfs_inode_validate_cowextsize(ip->i_mount,
+                                              XFS_B_TO_FSB(mp, fa->fsx_cowextsize),
+                                              VFS_I(ip)->i_mode, new_diflags,
+                                              new_diflags2);
+     return failaddr != NULL ? -EINVAL : 0;
  }

  static int
··· 570,570
   * @flags: Link flags.
   * @rpm_active: Whether or not the consumer device is runtime-PM-active.
   * @kref: Count repeated addition of the same link.
-  * @rcu_head: An RCU head to use for deferred execution of SRCU callbacks.
+  * @rm_work: Work structure used for removing the link.
   * @supplier_preactivated: Supplier has been made active before consumer probe.
   */
  struct device_link {

··· 583,583
      u32 flags;
      refcount_t rpm_active;
      struct kref kref;
- #ifdef CONFIG_SRCU
-     struct rcu_head rcu_head;
- #endif
+     struct work_struct rm_work;
      bool supplier_preactivated; /* Owned by consumer probe. */
  };
··· 496,496
   * @mac_managed_pm: Set true if MAC driver takes of suspending/resuming PHY
   * @state: State of the PHY for management purposes
   * @dev_flags: Device-specific flags used by the PHY driver.
+  *             Bits [15:0] are free to use by the PHY driver to communicate
+  *                         driver specific behavior.
+  *             Bits [23:16] are currently reserved for future use.
+  *             Bits [31:24] are reserved for defining generic
+  *                          PHY driver behavior.
   * @irq: IRQ number of the PHY's interrupt (-1 if none)
   * @phy_timer: The timer for handling the state machine
   * @phylink: Pointer to phylink instance for this PHY
··· 128,128
  static inline void qdisc_run(struct Qdisc *q)
  {
      if (qdisc_run_begin(q)) {
-         /* NOLOCK qdisc must check 'state' under the qdisc seqlock
-          * to avoid racing with dev_qdisc_reset()
-          */
-         if (!(q->flags & TCQ_F_NOLOCK) ||
-             likely(!test_bit(__QDISC_STATE_DEACTIVATED, &q->state)))
-             __qdisc_run(q);
+         __qdisc_run(q);
          qdisc_run_end(q);
      }
  }
+34-1
include/net/sch_generic.h
··· 36,36
  enum qdisc_state_t {
      __QDISC_STATE_SCHED,
      __QDISC_STATE_DEACTIVATED,
+     __QDISC_STATE_MISSED,
  };

  struct qdisc_size_table {

··· 160,159
  static inline bool qdisc_run_begin(struct Qdisc *qdisc)
  {
      if (qdisc->flags & TCQ_F_NOLOCK) {
+         if (spin_trylock(&qdisc->seqlock))
+             goto nolock_empty;
+
+         /* If the MISSED flag is set, it means other thread has
+          * set the MISSED flag before second spin_trylock(), so
+          * we can return false here to avoid multi cpus doing
+          * the set_bit() and second spin_trylock() concurrently.
+          */
+         if (test_bit(__QDISC_STATE_MISSED, &qdisc->state))
+             return false;
+
+         /* Set the MISSED flag before the second spin_trylock(),
+          * if the second spin_trylock() return false, it means
+          * other cpu holding the lock will do dequeuing for us
+          * or it will see the MISSED flag set after releasing
+          * lock and reschedule the net_tx_action() to do the
+          * dequeuing.
+          */
+         set_bit(__QDISC_STATE_MISSED, &qdisc->state);
+
+         /* Retry again in case other CPU may not see the new flag
+          * after it releases the lock at the end of qdisc_run_end().
+          */
          if (!spin_trylock(&qdisc->seqlock))
              return false;
+
+ nolock_empty:
          WRITE_ONCE(qdisc->empty, false);
      } else if (qdisc_is_running(qdisc)) {
          return false;

··· 202,176
  static inline void qdisc_run_end(struct Qdisc *qdisc)
  {
      write_seqcount_end(&qdisc->running);
-     if (qdisc->flags & TCQ_F_NOLOCK)
+     if (qdisc->flags & TCQ_F_NOLOCK) {
          spin_unlock(&qdisc->seqlock);
+
+         if (unlikely(test_bit(__QDISC_STATE_MISSED,
+                               &qdisc->state))) {
+             clear_bit(__QDISC_STATE_MISSED, &qdisc->state);
+             __netif_schedule(qdisc);
+         }
+     }
  }

  static inline bool qdisc_may_bulk(const struct Qdisc *qdisc)
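The qdisc_run_begin()/qdisc_run_end() hunks above close a lost-wakeup window for NOLOCK qdiscs: a contender that fails both trylocks first sets a MISSED flag, and the lock holder checks that flag after unlocking. The handshake can be modeled in userspace; the sketch below uses Python with illustrative names (the kernel uses atomic bitops and spin_trylock(), not Python locks):

```python
import threading

class ToyQdisc:
    """Toy model of the NOLOCK qdisc MISSED-flag handshake: a contender
    that fails the first trylock sets MISSED and retries once; the lock
    holder checks MISSED after unlocking and "reschedules", so a missed
    dequeue opportunity is never silently dropped."""

    def __init__(self):
        self.seqlock = threading.Lock()   # stands in for qdisc->seqlock
        self.missed = False               # stands in for __QDISC_STATE_MISSED
        self.rescheduled = 0              # counts __netif_schedule() stand-ins

    def run_begin(self):
        if self.seqlock.acquire(blocking=False):
            return True                   # uncontended fast path
        if self.missed:                   # another contender already flagged it
            return False
        self.missed = True                # set MISSED *before* the retry
        return self.seqlock.acquire(blocking=False)

    def run_end(self):
        self.seqlock.release()
        if self.missed:                   # a contender lost both trylocks
            self.missed = False
            self.rescheduled += 1         # kernel: __netif_schedule(qdisc)
```

With the lock held, a second run_begin() fails but leaves MISSED set, and the holder's run_end() observes the flag and reschedules; without the flag, the contender's packets could sit in the queue until an unrelated transmit.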
··· 8,8
   * Note: you must update KVM_API_VERSION if you change this interface.
   */

+ #include <linux/const.h>
  #include <linux/types.h>
  #include <linux/compiler.h>
  #include <linux/ioctl.h>

··· 1880,1879
   * conversion after harvesting an entry. Also, it must not skip any
   * dirty bits, so that dirty bits are always harvested in sequence.
   */
- #define KVM_DIRTY_GFN_F_DIRTY           BIT(0)
- #define KVM_DIRTY_GFN_F_RESET           BIT(1)
+ #define KVM_DIRTY_GFN_F_DIRTY           _BITUL(0)
+ #define KVM_DIRTY_GFN_F_RESET           _BITUL(1)
  #define KVM_DIRTY_GFN_F_MASK            0x3

  /*
+1-40
init/Kconfig
··· 442,442

  source "kernel/irq/Kconfig"
  source "kernel/time/Kconfig"
+ source "kernel/bpf/Kconfig"
  source "kernel/Kconfig.preempt"

  menu "CPU/Task time and stats accounting"

··· 1713,1712
  # end of the "standard kernel features (expert users)" menu

  # syscall, maps, verifier
-
- config BPF_LSM
-     bool "LSM Instrumentation with BPF"
-     depends on BPF_EVENTS
-     depends on BPF_SYSCALL
-     depends on SECURITY
-     depends on BPF_JIT
-     help
-       Enables instrumentation of the security hooks with eBPF programs for
-       implementing dynamic MAC and Audit Policies.
-
-       If you are unsure how to answer this question, answer N.
-
- config BPF_SYSCALL
-     bool "Enable bpf() system call"
-     select BPF
-     select IRQ_WORK
-     select TASKS_TRACE_RCU
-     select BINARY_PRINTF
-     select NET_SOCK_MSG if INET
-     default n
-     help
-       Enable the bpf() system call that allows to manipulate eBPF
-       programs and maps via file descriptors.
-
- config ARCH_WANT_DEFAULT_BPF_JIT
-     bool
-
- config BPF_JIT_ALWAYS_ON
-     bool "Permanently enable BPF JIT and remove BPF interpreter"
-     depends on BPF_SYSCALL && HAVE_EBPF_JIT && BPF_JIT
-     help
-       Enables BPF JIT and removes BPF interpreter to avoid
-       speculative execution of BPF instructions by the interpreter
-
- config BPF_JIT_DEFAULT_ON
-     def_bool ARCH_WANT_DEFAULT_BPF_JIT || BPF_JIT_ALWAYS_ON
-     depends on HAVE_EBPF_JIT && BPF_JIT
-
- source "kernel/bpf/preload/Kconfig"

  config USERFAULTFD
      bool "Enable userfaultfd() system call"
+89
kernel/bpf/Kconfig
··· 1,1
+ # SPDX-License-Identifier: GPL-2.0-only
+
+ # BPF interpreter that, for example, classic socket filters depend on.
+ config BPF
+     bool
+
+ # Used by archs to tell that they support BPF JIT compiler plus which
+ # flavour. Only one of the two can be selected for a specific arch since
+ # eBPF JIT supersedes the cBPF JIT.
+
+ # Classic BPF JIT (cBPF)
+ config HAVE_CBPF_JIT
+     bool
+
+ # Extended BPF JIT (eBPF)
+ config HAVE_EBPF_JIT
+     bool
+
+ # Used by archs to tell that they want the BPF JIT compiler enabled by
+ # default for kernels that were compiled with BPF JIT support.
+ config ARCH_WANT_DEFAULT_BPF_JIT
+     bool
+
+ menu "BPF subsystem"
+
+ config BPF_SYSCALL
+     bool "Enable bpf() system call"
+     select BPF
+     select IRQ_WORK
+     select TASKS_TRACE_RCU
+     select BINARY_PRINTF
+     select NET_SOCK_MSG if INET
+     default n
+     help
+       Enable the bpf() system call that allows to manipulate BPF programs
+       and maps via file descriptors.
+
+ config BPF_JIT
+     bool "Enable BPF Just In Time compiler"
+     depends on BPF
+     depends on HAVE_CBPF_JIT || HAVE_EBPF_JIT
+     depends on MODULES
+     help
+       BPF programs are normally handled by a BPF interpreter. This option
+       allows the kernel to generate native code when a program is loaded
+       into the kernel. This will significantly speed-up processing of BPF
+       programs.
+
+       Note, an admin should enable this feature changing:
+       /proc/sys/net/core/bpf_jit_enable
+       /proc/sys/net/core/bpf_jit_harden   (optional)
+       /proc/sys/net/core/bpf_jit_kallsyms (optional)
+
+ config BPF_JIT_ALWAYS_ON
+     bool "Permanently enable BPF JIT and remove BPF interpreter"
+     depends on BPF_SYSCALL && HAVE_EBPF_JIT && BPF_JIT
+     help
+       Enables BPF JIT and removes BPF interpreter to avoid speculative
+       execution of BPF instructions by the interpreter.
+
+ config BPF_JIT_DEFAULT_ON
+     def_bool ARCH_WANT_DEFAULT_BPF_JIT || BPF_JIT_ALWAYS_ON
+     depends on HAVE_EBPF_JIT && BPF_JIT
+
+ config BPF_UNPRIV_DEFAULT_OFF
+     bool "Disable unprivileged BPF by default"
+     depends on BPF_SYSCALL
+     help
+       Disables unprivileged BPF by default by setting the corresponding
+       /proc/sys/kernel/unprivileged_bpf_disabled knob to 2. An admin can
+       still reenable it by setting it to 0 later on, or permanently
+       disable it by setting it to 1 (from which no other transition to
+       0 is possible anymore).
+
+ source "kernel/bpf/preload/Kconfig"
+
+ config BPF_LSM
+     bool "Enable BPF LSM Instrumentation"
+     depends on BPF_EVENTS
+     depends on BPF_SYSCALL
+     depends on SECURITY
+     depends on BPF_JIT
+     help
+       Enables instrumentation of the security hooks with BPF programs for
+       implementing dynamic MAC and Audit Policies.
+
+       If you are unsure how to answer this question, answer N.
+
+ endmenu # "BPF subsystem"
+2
kernel/bpf/bpf_lsm.c
··· 107,107
          return &bpf_inode_storage_get_proto;
      case BPF_FUNC_inode_storage_delete:
          return &bpf_inode_storage_delete_proto;
+ #ifdef CONFIG_NET
      case BPF_FUNC_sk_storage_get:
          return &bpf_sk_storage_get_proto;
      case BPF_FUNC_sk_storage_delete:
          return &bpf_sk_storage_delete_proto;
+ #endif /* CONFIG_NET */
      case BPF_FUNC_spin_lock:
          return &bpf_spin_lock_proto;
      case BPF_FUNC_spin_unlock:
+12
kernel/bpf/btf.c
··· 5206,5206
      m->ret_size = ret;

      for (i = 0; i < nargs; i++) {
+         if (i == nargs - 1 && args[i].type == 0) {
+             bpf_log(log,
+                     "The function %s with variable args is unsupported.\n",
+                     tname);
+             return -EINVAL;
+         }
          ret = __get_type_size(btf, args[i].type, &t);
          if (ret < 0) {
              bpf_log(log,
                      "The function %s arg%d type %s is unsupported.\n",
                      tname, i, btf_kind_str[BTF_INFO_KIND(t->info)]);
+             return -EINVAL;
+         }
+         if (ret == 0) {
+             bpf_log(log,
+                     "The function %s has malformed void argument.\n",
+                     tname);
              return -EINVAL;
          }
          m->arg_size[i] = ret;
+19-16
kernel/bpf/helpers.c
··· 692,692
      return -EINVAL;
  }

- /* Per-cpu temp buffers which can be used by printf-like helpers for %s or %p
+ /* Per-cpu temp buffers used by printf-like helpers to store the bprintf binary
+  * arguments representation.
   */
- #define MAX_PRINTF_BUF_LEN      512
+ #define MAX_BPRINTF_BUF_LEN     512

- struct bpf_printf_buf {
-     char tmp_buf[MAX_PRINTF_BUF_LEN];
+ /* Support executing three nested bprintf helper calls on a given CPU */
+ #define MAX_BPRINTF_NEST_LEVEL  3
+ struct bpf_bprintf_buffers {
+     char tmp_bufs[MAX_BPRINTF_NEST_LEVEL][MAX_BPRINTF_BUF_LEN];
  };
- static DEFINE_PER_CPU(struct bpf_printf_buf, bpf_printf_buf);
- static DEFINE_PER_CPU(int, bpf_printf_buf_used);
+ static DEFINE_PER_CPU(struct bpf_bprintf_buffers, bpf_bprintf_bufs);
+ static DEFINE_PER_CPU(int, bpf_bprintf_nest_level);

  static int try_get_fmt_tmp_buf(char **tmp_buf)
  {
-     struct bpf_printf_buf *bufs;
-     int used;
+     struct bpf_bprintf_buffers *bufs;
+     int nest_level;

      preempt_disable();
-     used = this_cpu_inc_return(bpf_printf_buf_used);
-     if (WARN_ON_ONCE(used > 1)) {
-         this_cpu_dec(bpf_printf_buf_used);
+     nest_level = this_cpu_inc_return(bpf_bprintf_nest_level);
+     if (WARN_ON_ONCE(nest_level > MAX_BPRINTF_NEST_LEVEL)) {
+         this_cpu_dec(bpf_bprintf_nest_level);
          preempt_enable();
          return -EBUSY;
      }
-     bufs = this_cpu_ptr(&bpf_printf_buf);
-     *tmp_buf = bufs->tmp_buf;
+     bufs = this_cpu_ptr(&bpf_bprintf_bufs);
+     *tmp_buf = bufs->tmp_bufs[nest_level - 1];

      return 0;
  }

  void bpf_bprintf_cleanup(void)
  {
-     if (this_cpu_read(bpf_printf_buf_used)) {
-         this_cpu_dec(bpf_printf_buf_used);
+     if (this_cpu_read(bpf_bprintf_nest_level)) {
+         this_cpu_dec(bpf_bprintf_nest_level);
          preempt_enable();
      }
  }

··· 763,760
      if (num_args && try_get_fmt_tmp_buf(&tmp_buf))
          return -EBUSY;

-     tmp_buf_end = tmp_buf + MAX_PRINTF_BUF_LEN;
+     tmp_buf_end = tmp_buf + MAX_BPRINTF_BUF_LEN;
      *bin_args = (u32 *)tmp_buf;
  }
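The helpers.c change above replaces a single per-CPU buffer guarded by a used/unused flag with a small stack of buffers indexed by nesting level, so up to three nested bprintf helper calls on one CPU each get a distinct scratch area. A toy model of the bookkeeping (Python; names mirror the patch but the real code is per-CPU with preemption disabled):

```python
MAX_BPRINTF_NEST_LEVEL = 3
MAX_BPRINTF_BUF_LEN = 512

class BprintfBufs:
    """One scratch buffer per nesting level; a fourth nested call on the
    same CPU fails with the equivalent of -EBUSY instead of corrupting
    an in-use buffer."""

    def __init__(self):
        self.nest_level = 0                      # models bpf_bprintf_nest_level
        self.bufs = [bytearray(MAX_BPRINTF_BUF_LEN)
                     for _ in range(MAX_BPRINTF_NEST_LEVEL)]

    def try_get_buf(self):
        """Models try_get_fmt_tmp_buf(): a buffer, or None for -EBUSY."""
        self.nest_level += 1
        if self.nest_level > MAX_BPRINTF_NEST_LEVEL:
            self.nest_level -= 1                 # undo, like the kernel's dec
            return None
        return self.bufs[self.nest_level - 1]

    def cleanup(self):
        """Models bpf_bprintf_cleanup(): drop one nesting level."""
        if self.nest_level:
            self.nest_level -= 1
```

The old code WARNed and bailed whenever the single buffer was already in use; indexing by nesting level keeps each level's binary-args representation separate until the matching cleanup runs.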
···
 };

 static int retrieve_ptr_limit(const struct bpf_reg_state *ptr_reg,
-			      const struct bpf_reg_state *off_reg,
-			      u32 *alu_limit, u8 opcode)
+			      u32 *alu_limit, bool mask_to_left)
 {
-	bool off_is_neg = off_reg->smin_value < 0;
-	bool mask_to_left = (opcode == BPF_ADD && off_is_neg) ||
-			    (opcode == BPF_SUB && !off_is_neg);
 	u32 max = 0, ptr_limit = 0;
-
-	if (!tnum_is_const(off_reg->var_off) &&
-	    (off_reg->smin_value < 0) != (off_reg->smax_value < 0))
-		return REASON_BOUNDS;

 	switch (ptr_reg->type) {
 	case PTR_TO_STACK:
···
 	return opcode == BPF_ADD || opcode == BPF_SUB;
 }

+struct bpf_sanitize_info {
+	struct bpf_insn_aux_data aux;
+	bool mask_to_left;
+};
+
 static int sanitize_ptr_alu(struct bpf_verifier_env *env,
 			    struct bpf_insn *insn,
 			    const struct bpf_reg_state *ptr_reg,
 			    const struct bpf_reg_state *off_reg,
 			    struct bpf_reg_state *dst_reg,
-			    struct bpf_insn_aux_data *tmp_aux,
+			    struct bpf_sanitize_info *info,
 			    const bool commit_window)
 {
-	struct bpf_insn_aux_data *aux = commit_window ? cur_aux(env) : tmp_aux;
+	struct bpf_insn_aux_data *aux = commit_window ? cur_aux(env) : &info->aux;
 	struct bpf_verifier_state *vstate = env->cur_state;
 	bool off_is_imm = tnum_is_const(off_reg->var_off);
 	bool off_is_neg = off_reg->smin_value < 0;
···
 	if (vstate->speculative)
 		goto do_sim;

-	err = retrieve_ptr_limit(ptr_reg, off_reg, &alu_limit, opcode);
+	if (!commit_window) {
+		if (!tnum_is_const(off_reg->var_off) &&
+		    (off_reg->smin_value < 0) != (off_reg->smax_value < 0))
+			return REASON_BOUNDS;
+
+		info->mask_to_left = (opcode == BPF_ADD && off_is_neg) ||
+				     (opcode == BPF_SUB && !off_is_neg);
+	}
+
+	err = retrieve_ptr_limit(ptr_reg, &alu_limit, info->mask_to_left);
 	if (err < 0)
 		return err;
···
 		/* In commit phase we narrow the masking window based on
 		 * the observed pointer move after the simulated operation.
 		 */
-		alu_state = tmp_aux->alu_state;
-		alu_limit = abs(tmp_aux->alu_limit - alu_limit);
+		alu_state = info->aux.alu_state;
+		alu_limit = abs(info->aux.alu_limit - alu_limit);
 	} else {
 		alu_state = off_is_neg ? BPF_ALU_NEG_VALUE : 0;
 		alu_state |= off_is_imm ? BPF_ALU_IMMEDIATE : 0;
···
 	/* If we're in commit phase, we're done here given we already
 	 * pushed the truncated dst_reg into the speculative verification
 	 * stack.
+	 *
+	 * Also, when register is a known constant, we rewrite register-based
+	 * operation to immediate-based, and thus do not need masking (and as
+	 * a consequence, do not need to simulate the zero-truncation either).
 	 */
-	if (commit_window)
+	if (commit_window || off_is_imm)
 		return 0;

 	/* Simulate and find potential out-of-bounds access under
···
 	    smin_ptr = ptr_reg->smin_value, smax_ptr = ptr_reg->smax_value;
 	u64 umin_val = off_reg->umin_value, umax_val = off_reg->umax_value,
 	    umin_ptr = ptr_reg->umin_value, umax_ptr = ptr_reg->umax_value;
-	struct bpf_insn_aux_data tmp_aux = {};
+	struct bpf_sanitize_info info = {};
 	u8 opcode = BPF_OP(insn->code);
 	u32 dst = insn->dst_reg;
 	int ret;
···

 	if (sanitize_needed(opcode)) {
 		ret = sanitize_ptr_alu(env, insn, ptr_reg, off_reg, dst_reg,
-				       &tmp_aux, false);
+				       &info, false);
 		if (ret < 0)
 			return sanitize_err(env, insn, ret, off_reg, dst_reg);
 	}
···
 		return -EACCES;
 	if (sanitize_needed(opcode)) {
 		ret = sanitize_ptr_alu(env, insn, dst_reg, off_reg, dst_reg,
-				       &tmp_aux, true);
+				       &info, true);
 		if (ret < 0)
 			return sanitize_err(env, insn, ret, off_reg, dst_reg);
 	}
···
 	s32 smin_val = src_reg->s32_min_value;
 	u32 umax_val = src_reg->u32_max_value;

-	/* Assuming scalar64_min_max_and will be called so its safe
-	 * to skip updating register for known 32-bit case.
-	 */
-	if (src_known && dst_known)
+	if (src_known && dst_known) {
+		__mark_reg32_known(dst_reg, var32_off.value);
 		return;
+	}

 	/* We get our minimum from the var_off, since that's inherently
 	 * bitwise.  Our maximum is the minimum of the operands' maxima.
···
 		dst_reg->s32_min_value = dst_reg->u32_min_value;
 		dst_reg->s32_max_value = dst_reg->u32_max_value;
 	}
-
 }

 static void scalar_min_max_and(struct bpf_reg_state *dst_reg,
···
 	s32 smin_val = src_reg->s32_min_value;
 	u32 umin_val = src_reg->u32_min_value;

-	/* Assuming scalar64_min_max_or will be called so it is safe
-	 * to skip updating register for known case.
-	 */
-	if (src_known && dst_known)
+	if (src_known && dst_known) {
+		__mark_reg32_known(dst_reg, var32_off.value);
 		return;
+	}

 	/* We get our maximum from the var_off, and our minimum is the
 	 * maximum of the operands' minima
···
 	struct tnum var32_off = tnum_subreg(dst_reg->var_off);
 	s32 smin_val = src_reg->s32_min_value;

-	/* Assuming scalar64_min_max_xor will be called so it is safe
-	 * to skip updating register for known case.
-	 */
-	if (src_known && dst_known)
+	if (src_known && dst_known) {
+		__mark_reg32_known(dst_reg, var32_off.value);
 		return;
+	}

 	/* We get both minimum and maximum from the var32_off. */
 	dst_reg->u32_min_value = var32_off.value;
···
 	return 0;
 }

+BTF_SET_START(btf_id_deny)
+BTF_ID_UNUSED
+#ifdef CONFIG_SMP
+BTF_ID(func, migrate_disable)
+BTF_ID(func, migrate_enable)
+#endif
+#if !defined CONFIG_PREEMPT_RCU && !defined CONFIG_TINY_RCU
+BTF_ID(func, rcu_read_unlock_strict)
+#endif
+BTF_SET_END(btf_id_deny)
+
 static int check_attach_btf_id(struct bpf_verifier_env *env)
 {
 	struct bpf_prog *prog = env->prog;
···
 		ret = bpf_lsm_verify_prog(&env->log, prog);
 		if (ret < 0)
 			return ret;
+	} else if (prog->type == BPF_PROG_TYPE_TRACING &&
+		   btf_id_set_contains(&btf_id_deny, btf_id)) {
+		return -EINVAL;
 	}

 	key = bpf_trampoline_compute_key(tgt_prog, prog->aux->attach_btf, btf_id);
···
 	if (is_priv)
 		env->test_state_freq = attr->prog_flags & BPF_F_TEST_STATE_FREQ;

-	if (bpf_prog_is_dev_bound(env->prog->aux)) {
-		ret = bpf_prog_offload_verifier_prep(env->prog);
-		if (ret)
-			goto skip_full_check;
-	}
-
 	env->explored_states = kvcalloc(state_htab_size(env),
 				       sizeof(struct bpf_verifier_state_list *),
 				       GFP_USER);
···
 	ret = resolve_pseudo_ldimm64(env);
 	if (ret < 0)
 		goto skip_full_check;
+
+	if (bpf_prog_is_dev_bound(env->prog->aux)) {
+		ret = bpf_prog_offload_verifier_prep(env->prog);
+		if (ret)
+			goto skip_full_check;
+	}

 	ret = check_cfg(env);
 	if (ret < 0)
+16-14
kernel/seccomp.c
···

 	up(&match->notif->request);
 	wake_up_poll(&match->wqh, EPOLLIN | EPOLLRDNORM);
-	mutex_unlock(&match->notify_lock);

 	/*
 	 * This is where we wait for a reply from userspace.
 	 */
-wait:
-	err = wait_for_completion_interruptible(&n.ready);
-	mutex_lock(&match->notify_lock);
-	if (err == 0) {
-		/* Check if we were woken up by a addfd message */
+	do {
+		mutex_unlock(&match->notify_lock);
+		err = wait_for_completion_interruptible(&n.ready);
+		mutex_lock(&match->notify_lock);
+		if (err != 0)
+			goto interrupted;
+
 		addfd = list_first_entry_or_null(&n.addfd,
 						 struct seccomp_kaddfd, list);
-		if (addfd && n.state != SECCOMP_NOTIFY_REPLIED) {
+		/* Check if we were woken up by a addfd message */
+		if (addfd)
 			seccomp_handle_addfd(addfd);
-			mutex_unlock(&match->notify_lock);
-			goto wait;
-		}
-		ret = n.val;
-		err = n.error;
-		flags = n.flags;
-	}

+	} while (n.state != SECCOMP_NOTIFY_REPLIED);
+
+	ret = n.val;
+	err = n.error;
+	flags = n.flags;
+
+interrupted:
 	/* If there were any pending addfd calls, clear them out */
 	list_for_each_entry_safe(addfd, tmp, &n.addfd, list) {
 		/* The process went away before we got a chance to handle it */
···
 	wait_event_lock_irq(percpu_ref_switch_waitq, !data->confirm_switch,
 			    percpu_ref_switch_lock);

-	if (data->force_atomic || (ref->percpu_count_ptr & __PERCPU_REF_DEAD))
+	if (data->force_atomic || percpu_ref_is_dying(ref))
 		__percpu_ref_switch_to_atomic(ref, confirm_switch);
 	else
 		__percpu_ref_switch_to_percpu(ref);
···

 	spin_lock_irqsave(&percpu_ref_switch_lock, flags);

-	WARN_ONCE(ref->percpu_count_ptr & __PERCPU_REF_DEAD,
+	WARN_ONCE(percpu_ref_is_dying(ref),
 		  "%s called more than once on %ps!", __func__,
 		  ref->data->release);
···

 	spin_lock_irqsave(&percpu_ref_switch_lock, flags);

-	WARN_ON_ONCE(!(ref->percpu_count_ptr & __PERCPU_REF_DEAD));
+	WARN_ON_ONCE(!percpu_ref_is_dying(ref));
 	WARN_ON_ONCE(__ref_is_percpu(ref, &percpu_count));

 	ref->percpu_count_ptr &= ~__PERCPU_REF_DEAD;
-27
net/Kconfig
···
 	select DQL
 	default y

-config BPF_JIT
-	bool "enable BPF Just In Time compiler"
-	depends on HAVE_CBPF_JIT || HAVE_EBPF_JIT
-	depends on MODULES
-	help
-	  Berkeley Packet Filter filtering capabilities are normally handled
-	  by an interpreter. This option allows kernel to generate a native
-	  code when filter is loaded in memory. This should speedup
-	  packet sniffing (libpcap/tcpdump).
-
-	  Note, admin should enable this feature changing:
-	  /proc/sys/net/core/bpf_jit_enable
-	  /proc/sys/net/core/bpf_jit_harden   (optional)
-	  /proc/sys/net/core/bpf_jit_kallsyms (optional)
-
 config BPF_STREAM_PARSER
 	bool "enable BPF STREAM_PARSER"
 	depends on INET
···
 	  e.g. notification messages.

 endif   # if NET
-
-# Used by archs to tell that they support BPF JIT compiler plus which flavour.
-# Only one of the two can be selected for a specific arch since eBPF JIT supersedes
-# the cBPF JIT.
-
-# Classic BPF JIT (cBPF)
-config HAVE_CBPF_JIT
-	bool
-
-# Extended BPF JIT (eBPF)
-config HAVE_EBPF_JIT
-	bool
+34-17
net/can/isotp.c
···
 	if (len < ISOTP_MIN_NAMELEN)
 		return -EINVAL;

-	/* do not register frame reception for functional addressing */
-	if (so->opt.flags & CAN_ISOTP_SF_BROADCAST)
-		do_rx_reg = 0;
-
-	/* do not validate rx address for functional addressing */
-	if (do_rx_reg) {
-		if (addr->can_addr.tp.rx_id == addr->can_addr.tp.tx_id)
-			return -EADDRNOTAVAIL;
-
-		if (addr->can_addr.tp.rx_id & (CAN_ERR_FLAG | CAN_RTR_FLAG))
-			return -EADDRNOTAVAIL;
-	}
-
 	if (addr->can_addr.tp.tx_id & (CAN_ERR_FLAG | CAN_RTR_FLAG))
 		return -EADDRNOTAVAIL;
···
 		return -ENODEV;

 	lock_sock(sk);
+
+	/* do not register frame reception for functional addressing */
+	if (so->opt.flags & CAN_ISOTP_SF_BROADCAST)
+		do_rx_reg = 0;
+
+	/* do not validate rx address for functional addressing */
+	if (do_rx_reg) {
+		if (addr->can_addr.tp.rx_id == addr->can_addr.tp.tx_id) {
+			err = -EADDRNOTAVAIL;
+			goto out;
+		}
+
+		if (addr->can_addr.tp.rx_id & (CAN_ERR_FLAG | CAN_RTR_FLAG)) {
+			err = -EADDRNOTAVAIL;
+			goto out;
+		}
+	}

 	if (so->bound && addr->can_ifindex == so->ifindex &&
 	    addr->can_addr.tp.rx_id == so->rxid &&
···
 	return ISOTP_MIN_NAMELEN;
 }

-static int isotp_setsockopt(struct socket *sock, int level, int optname,
+static int isotp_setsockopt_locked(struct socket *sock, int level, int optname,
 			    sockptr_t optval, unsigned int optlen)
 {
 	struct sock *sk = sock->sk;
 	struct isotp_sock *so = isotp_sk(sk);
 	int ret = 0;
-
-	if (level != SOL_CAN_ISOTP)
-		return -EINVAL;

 	if (so->bound)
 		return -EISCONN;
···
 		ret = -ENOPROTOOPT;
 	}

+	return ret;
+}
+
+static int isotp_setsockopt(struct socket *sock, int level, int optname,
+			    sockptr_t optval, unsigned int optlen)
+
+{
+	struct sock *sk = sock->sk;
+	int ret;
+
+	if (level != SOL_CAN_ISOTP)
+		return -EINVAL;
+
+	lock_sock(sk);
+	ret = isotp_setsockopt_locked(sock, level, optname, optval, optlen);
+	release_sock(sk);
 	return ret;
 }
+24-5
net/core/dev.c
···

 	if (q->flags & TCQ_F_NOLOCK) {
 		rc = q->enqueue(skb, q, &to_free) & NET_XMIT_MASK;
-		qdisc_run(q);
+		if (likely(!netif_xmit_frozen_or_stopped(txq)))
+			qdisc_run(q);

 		if (unlikely(to_free))
 			kfree_skb_list(to_free);
···
 		sd->output_queue_tailp = &sd->output_queue;
 		local_irq_enable();

+		rcu_read_lock();
+
 		while (head) {
 			struct Qdisc *q = head;
 			spinlock_t *root_lock = NULL;

 			head = head->next_sched;

-			if (!(q->flags & TCQ_F_NOLOCK)) {
-				root_lock = qdisc_lock(q);
-				spin_lock(root_lock);
-			}
 			/* We need to make sure head->next_sched is read
 			 * before clearing __QDISC_STATE_SCHED
 			 */
 			smp_mb__before_atomic();
+
+			if (!(q->flags & TCQ_F_NOLOCK)) {
+				root_lock = qdisc_lock(q);
+				spin_lock(root_lock);
+			} else if (unlikely(test_bit(__QDISC_STATE_DEACTIVATED,
+						     &q->state))) {
+				/* There is a synchronize_net() between
+				 * STATE_DEACTIVATED flag being set and
+				 * qdisc_reset()/some_qdisc_is_busy() in
+				 * dev_deactivate(), so we can safely bail out
+				 * early here to avoid data race between
+				 * qdisc_deactivate() and some_qdisc_is_busy()
+				 * for lockless qdisc.
+				 */
+				clear_bit(__QDISC_STATE_SCHED, &q->state);
+				continue;
+			}
+
 			clear_bit(__QDISC_STATE_SCHED, &q->state);
 			qdisc_run(q);
 			if (root_lock)
 				spin_unlock(root_lock);
 		}
+
+		rcu_read_unlock();
 	}

 	xfrm_dev_backlog(sd);
···
 	if (skb_is_tcp_pure_ack(skb))
 		return;

-	if (can_skb_orphan_partial(skb))
-		skb_set_owner_sk_safe(skb, skb->sk);
-	else
-		skb_orphan(skb);
+	if (can_skb_orphan_partial(skb) && skb_set_owner_sk_safe(skb, skb->sk))
+		return;
+
+	skb_orphan(skb);
 }
 EXPORT_SYMBOL(skb_orphan_partial);
+3-2
net/dsa/master.c
···
 	struct dsa_switch *ds = cpu_dp->ds;
 	int port = cpu_dp->index;
 	int len = ETH_GSTRING_LEN;
-	int mcount = 0, count;
-	unsigned int i;
+	int mcount = 0, count, i;
 	uint8_t pfx[4];
 	uint8_t *ndata;
···
 	 */
 	ds->ops->get_strings(ds, port, stringset, ndata);
 	count = ds->ops->get_sset_count(ds, port, stringset);
+	if (count < 0)
+		return;
 	for (i = 0; i < count; i++) {
 		memmove(ndata + (i * len + sizeof(pfx)),
 			ndata + i * len, len - sizeof(pfx));
+7-5
net/dsa/slave.c
···
 	struct dsa_switch *ds = dp->ds;

 	if (sset == ETH_SS_STATS) {
-		int count;
+		int count = 0;

-		count = 4;
-		if (ds->ops->get_sset_count)
-			count += ds->ops->get_sset_count(ds, dp->index, sset);
+		if (ds->ops->get_sset_count) {
+			count = ds->ops->get_sset_count(ds, dp->index, sset);
+			if (count < 0)
+				return count;
+		}

-		return count;
+		return count + 4;
 	} else if (sset == ETH_SS_TEST) {
 		return net_selftest_get_count();
 	}
···
 		     IPV6_TLV_PADN, 0 };

 	/* we assume size > sizeof(ra) here */
-	/* limit our allocations to order-0 page */
-	size = min_t(int, size, SKB_MAX_ORDER(0, 0));
 	skb = sock_alloc_send_skb(sk, size, 1, &err);
-
 	if (!skb)
 		return NULL;
+3-1
net/ipv6/reassembly.c
···
 	hdr = ipv6_hdr(skb);
 	fhdr = (struct frag_hdr *)skb_transport_header(skb);

-	if (!(fhdr->frag_off & htons(0xFFF9))) {
+	if (!(fhdr->frag_off & htons(IP6_OFFSET | IP6_MF))) {
 		/* It is not a fragmented frame */
 		skb->transport_header += sizeof(struct frag_hdr);
 		__IP6_INC_STATS(net,
···

 		IP6CB(skb)->nhoff = (u8 *)fhdr - skb_network_header(skb);
 		IP6CB(skb)->flags |= IP6SKB_FRAGMENTED;
+		IP6CB(skb)->frag_max_size = ntohs(hdr->payload_len) +
+					    sizeof(struct ipv6hdr);
 		return 1;
 	}
+13-23
net/mac80211/ieee80211_i.h
···
 #define IEEE80211_ENCRYPT_HEADROOM 8
 #define IEEE80211_ENCRYPT_TAILROOM 18

-/* IEEE 802.11 (Ch. 9.5 Defragmentation) requires support for concurrent
- * reception of at least three fragmented frames. This limit can be increased
- * by changing this define, at the cost of slower frame reassembly and
- * increased memory use (about 2 kB of RAM per entry). */
-#define IEEE80211_FRAGMENT_MAX 4
-
 /* power level hasn't been configured (or set to automatic) */
 #define IEEE80211_UNSET_POWER_LEVEL	INT_MIN
···
 #define IEEE80211_DEAUTH_FRAME_LEN	(24 /* hdr */ + 2 /* reason */)

 #define IEEE80211_MAX_NAN_INSTANCE_ID 255
-
-struct ieee80211_fragment_entry {
-	struct sk_buff_head skb_list;
-	unsigned long first_frag_time;
-	u16 seq;
-	u16 extra_len;
-	u16 last_frag;
-	u8 rx_queue;
-	bool check_sequential_pn; /* needed for CCMP/GCMP */
-	u8 last_pn[6]; /* PN of the last fragment if CCMP was used */
-};

 struct ieee80211_bss {
 	u32 device_ts_beacon, device_ts_presp;
···
 	 */
 	int security_idx;

-	u32 tkip_iv32;
-	u16 tkip_iv16;
+	union {
+		struct {
+			u32 iv32;
+			u16 iv16;
+		} tkip;
+		struct {
+			u8 pn[IEEE80211_CCMP_PN_LEN];
+		} ccm_gcm;
+	};
 };

 struct ieee80211_csa_settings {
···

 	char name[IFNAMSIZ];

-	/* Fragment table for host-based reassembly */
-	struct ieee80211_fragment_entry fragments[IEEE80211_FRAGMENT_MAX];
-	unsigned int fragment_next;
+	struct ieee80211_fragment_cache frags;

 	/* TID bitmap for NoAck policy */
 	u16 noack_map;
···
 #else
 #define debug_noinline
 #endif
+
+void ieee80211_init_frag_cache(struct ieee80211_fragment_cache *cache);
+void ieee80211_destroy_frag_cache(struct ieee80211_fragment_cache *cache);

 #endif /* IEEE80211_I_H */
+3-8
net/mac80211/iface.c
···
  * Copyright 2008, Johannes Berg <johannes@sipsolutions.net>
  * Copyright 2013-2014  Intel Mobile Communications GmbH
  * Copyright (c) 2016        Intel Deutschland GmbH
- * Copyright (C) 2018-2020 Intel Corporation
+ * Copyright (C) 2018-2021 Intel Corporation
  */
 #include <linux/slab.h>
 #include <linux/kernel.h>
···
  */
 static void ieee80211_teardown_sdata(struct ieee80211_sub_if_data *sdata)
 {
-	int i;
-
 	/* free extra data */
 	ieee80211_free_keys(sdata, false);

 	ieee80211_debugfs_remove_netdev(sdata);

-	for (i = 0; i < IEEE80211_FRAGMENT_MAX; i++)
-		__skb_queue_purge(&sdata->fragments[i].skb_list);
-	sdata->fragment_next = 0;
+	ieee80211_destroy_frag_cache(&sdata->frags);

 	if (ieee80211_vif_is_mesh(&sdata->vif))
 		ieee80211_mesh_teardown_sdata(sdata);
···
 	sdata->wdev.wiphy = local->hw.wiphy;
 	sdata->local = local;

-	for (i = 0; i < IEEE80211_FRAGMENT_MAX; i++)
-		skb_queue_head_init(&sdata->fragments[i].skb_list);
+	ieee80211_init_frag_cache(&sdata->frags);

 	INIT_LIST_HEAD(&sdata->key_list);
+7
net/mac80211/key.c
···
 		       struct ieee80211_sub_if_data *sdata,
 		       struct sta_info *sta)
 {
+	static atomic_t key_color = ATOMIC_INIT(0);
 	struct ieee80211_key *old_key;
 	int idx = key->conf.keyidx;
 	bool pairwise = key->conf.flags & IEEE80211_KEY_FLAG_PAIRWISE;
···
 	key->local = sdata->local;
 	key->sdata = sdata;
 	key->sta = sta;
+
+	/*
+	 * Assign a unique ID to every key so we can easily prevent mixed
+	 * key and fragment cache attacks.
+	 */
+	key->color = atomic_inc_return(&key_color);

 	increment_tailroom_need_count(sdata);
+2
net/mac80211/key.h
···
 	} debugfs;
 #endif

+	unsigned int color;
+
 	/*
 	 * key config, must be last because it contains key
 	 * material as variable length member
+117-33
net/mac80211/rx.c
···
  * Copyright 2007-2010	Johannes Berg <johannes@sipsolutions.net>
  * Copyright 2013-2014  Intel Mobile Communications GmbH
  * Copyright(c) 2015 - 2017 Intel Deutschland GmbH
- * Copyright (C) 2018-2020 Intel Corporation
+ * Copyright (C) 2018-2021 Intel Corporation
  */

 #include <linux/jiffies.h>
···
 	return result;
 }

+void ieee80211_init_frag_cache(struct ieee80211_fragment_cache *cache)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(cache->entries); i++)
+		skb_queue_head_init(&cache->entries[i].skb_list);
+}
+
+void ieee80211_destroy_frag_cache(struct ieee80211_fragment_cache *cache)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(cache->entries); i++)
+		__skb_queue_purge(&cache->entries[i].skb_list);
+}
+
 static inline struct ieee80211_fragment_entry *
-ieee80211_reassemble_add(struct ieee80211_sub_if_data *sdata,
+ieee80211_reassemble_add(struct ieee80211_fragment_cache *cache,
 			 unsigned int frag, unsigned int seq, int rx_queue,
 			 struct sk_buff **skb)
 {
 	struct ieee80211_fragment_entry *entry;

-	entry = &sdata->fragments[sdata->fragment_next++];
-	if (sdata->fragment_next >= IEEE80211_FRAGMENT_MAX)
-		sdata->fragment_next = 0;
+	entry = &cache->entries[cache->next++];
+	if (cache->next >= IEEE80211_FRAGMENT_MAX)
+		cache->next = 0;

-	if (!skb_queue_empty(&entry->skb_list))
-		__skb_queue_purge(&entry->skb_list);
+	__skb_queue_purge(&entry->skb_list);

 	__skb_queue_tail(&entry->skb_list, *skb); /* no need for locking */
 	*skb = NULL;
···
 }

 static inline struct ieee80211_fragment_entry *
-ieee80211_reassemble_find(struct ieee80211_sub_if_data *sdata,
+ieee80211_reassemble_find(struct ieee80211_fragment_cache *cache,
 			  unsigned int frag, unsigned int seq,
 			  int rx_queue, struct ieee80211_hdr *hdr)
 {
 	struct ieee80211_fragment_entry *entry;
 	int i, idx;

-	idx = sdata->fragment_next;
+	idx = cache->next;
 	for (i = 0; i < IEEE80211_FRAGMENT_MAX; i++) {
 		struct ieee80211_hdr *f_hdr;
 		struct sk_buff *f_skb;
···
 		if (idx < 0)
 			idx = IEEE80211_FRAGMENT_MAX - 1;

-		entry = &sdata->fragments[idx];
+		entry = &cache->entries[idx];
 		if (skb_queue_empty(&entry->skb_list) || entry->seq != seq ||
 		    entry->rx_queue != rx_queue ||
 		    entry->last_frag + 1 != frag)
···
 	return NULL;
 }

+static bool requires_sequential_pn(struct ieee80211_rx_data *rx, __le16 fc)
+{
+	return rx->key &&
+	       (rx->key->conf.cipher == WLAN_CIPHER_SUITE_CCMP ||
+		rx->key->conf.cipher == WLAN_CIPHER_SUITE_CCMP_256 ||
+		rx->key->conf.cipher == WLAN_CIPHER_SUITE_GCMP ||
+		rx->key->conf.cipher == WLAN_CIPHER_SUITE_GCMP_256) &&
+	       ieee80211_has_protected(fc);
+}
+
 static ieee80211_rx_result debug_noinline
 ieee80211_rx_h_defragment(struct ieee80211_rx_data *rx)
 {
+	struct ieee80211_fragment_cache *cache = &rx->sdata->frags;
 	struct ieee80211_hdr *hdr;
 	u16 sc;
 	__le16 fc;
 	unsigned int frag, seq;
 	struct ieee80211_fragment_entry *entry;
 	struct sk_buff *skb;
+	struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(rx->skb);

 	hdr = (struct ieee80211_hdr *)rx->skb->data;
 	fc = hdr->frame_control;
···
 		I802_DEBUG_INC(rx->local->dot11MulticastReceivedFrameCount);
 		goto out_no_led;
 	}
+
+	if (rx->sta)
+		cache = &rx->sta->frags;

 	if (likely(!ieee80211_has_morefrags(fc) && frag == 0))
 		goto out;
···

 	if (frag == 0) {
 		/* This is the first fragment of a new frame. */
-		entry = ieee80211_reassemble_add(rx->sdata, frag, seq,
+		entry = ieee80211_reassemble_add(cache, frag, seq,
 						 rx->seqno_idx, &(rx->skb));
-		if (rx->key &&
-		    (rx->key->conf.cipher == WLAN_CIPHER_SUITE_CCMP ||
-		     rx->key->conf.cipher == WLAN_CIPHER_SUITE_CCMP_256 ||
-		     rx->key->conf.cipher == WLAN_CIPHER_SUITE_GCMP ||
-		     rx->key->conf.cipher == WLAN_CIPHER_SUITE_GCMP_256) &&
-		    ieee80211_has_protected(fc)) {
+		if (requires_sequential_pn(rx, fc)) {
 			int queue = rx->security_idx;

 			/* Store CCMP/GCMP PN so that we can verify that the
 			 * next fragment has a sequential PN value.
 			 */
 			entry->check_sequential_pn = true;
+			entry->is_protected = true;
+			entry->key_color = rx->key->color;
 			memcpy(entry->last_pn,
 			       rx->key->u.ccmp.rx_pn[queue],
 			       IEEE80211_CCMP_PN_LEN);
···
 				     sizeof(rx->key->u.gcmp.rx_pn[queue]));
 			BUILD_BUG_ON(IEEE80211_CCMP_PN_LEN !=
 				     IEEE80211_GCMP_PN_LEN);
+		} else if (rx->key &&
+			   (ieee80211_has_protected(fc) ||
+			    (status->flag & RX_FLAG_DECRYPTED))) {
+			entry->is_protected = true;
+			entry->key_color = rx->key->color;
 		}
 		return RX_QUEUED;
 	}
···
 	/* This is a fragment for a frame that should already be pending in
 	 * fragment cache. Add this fragment to the end of the pending entry.
 	 */
-	entry = ieee80211_reassemble_find(rx->sdata, frag, seq,
+	entry = ieee80211_reassemble_find(cache, frag, seq,
 					  rx->seqno_idx, hdr);
 	if (!entry) {
 		I802_DEBUG_INC(rx->local->rx_handlers_drop_defrag);
···
 	if (entry->check_sequential_pn) {
 		int i;
 		u8 pn[IEEE80211_CCMP_PN_LEN], *rpn;
-		int queue;

-		if (!rx->key ||
-		    (rx->key->conf.cipher != WLAN_CIPHER_SUITE_CCMP &&
-		     rx->key->conf.cipher != WLAN_CIPHER_SUITE_CCMP_256 &&
-		     rx->key->conf.cipher != WLAN_CIPHER_SUITE_GCMP &&
-		     rx->key->conf.cipher != WLAN_CIPHER_SUITE_GCMP_256))
+		if (!requires_sequential_pn(rx, fc))
 			return RX_DROP_UNUSABLE;
+
+		/* Prevent mixed key and fragment cache attacks */
+		if (entry->key_color != rx->key->color)
+			return RX_DROP_UNUSABLE;
+
 		memcpy(pn, entry->last_pn, IEEE80211_CCMP_PN_LEN);
 		for (i = IEEE80211_CCMP_PN_LEN - 1; i >= 0; i--) {
 			pn[i]++;
 			if (pn[i])
 				break;
 		}
-		queue = rx->security_idx;
-		rpn = rx->key->u.ccmp.rx_pn[queue];
+
+		rpn = rx->ccm_gcm.pn;
 		if (memcmp(pn, rpn, IEEE80211_CCMP_PN_LEN))
 			return RX_DROP_UNUSABLE;
 		memcpy(entry->last_pn, pn, IEEE80211_CCMP_PN_LEN);
+	} else if (entry->is_protected &&
+		   (!rx->key ||
+		    (!ieee80211_has_protected(fc) &&
+		     !(status->flag & RX_FLAG_DECRYPTED)) ||
+		    rx->key->color != entry->key_color)) {
+		/* Drop this as a mixed key or fragment cache attack, even
+		 * if for TKIP Michael MIC should protect us, and WEP is a
+		 * lost cause anyway.
+		 */
+		return RX_DROP_UNUSABLE;
+	} else if (entry->is_protected && rx->key &&
+		   entry->key_color != rx->key->color &&
+		   (status->flag & RX_FLAG_DECRYPTED)) {
+		return RX_DROP_UNUSABLE;
 	}

 	skb_pull(rx->skb, ieee80211_hdrlen(fc));
···
 	struct ethhdr *ehdr = (struct ethhdr *) rx->skb->data;

 	/*
-	 * Allow EAPOL frames to us/the PAE group address regardless
-	 * of whether the frame was encrypted or not.
+	 * Allow EAPOL frames to us/the PAE group address regardless of
+	 * whether the frame was encrypted or not, and always disallow
+	 * all other destination addresses for them.
 	 */
-	if (ehdr->h_proto == rx->sdata->control_port_protocol &&
-	    (ether_addr_equal(ehdr->h_dest, rx->sdata->vif.addr) ||
-	     ether_addr_equal(ehdr->h_dest, pae_group_addr)))
-		return true;
+	if (unlikely(ehdr->h_proto == rx->sdata->control_port_protocol))
+		return ether_addr_equal(ehdr->h_dest, rx->sdata->vif.addr) ||
+		       ether_addr_equal(ehdr->h_dest, pae_group_addr);

 	if (ieee80211_802_1x_port_control(rx) ||
 	    ieee80211_drop_unencrypted(rx, fc))
···
 		cfg80211_rx_control_port(dev, skb, noencrypt);
 		dev_kfree_skb(skb);
 	} else {
+		struct ethhdr *ehdr = (void *)skb_mac_header(skb);
+
 		memset(skb->cb, 0, sizeof(skb->cb));
+
+		/*
+		 * 802.1X over 802.11 requires that the authenticator address
+		 * be used for EAPOL frames. However, 802.1X allows the use of
+		 * the PAE group address instead. If the interface is part of
+		 * a bridge and we pass the frame with the PAE group address,
+		 * then the bridge will forward it to the network (even if the
+		 * client was not associated yet), which isn't supposed to
+		 * happen.
+		 * To avoid that, rewrite the destination address to our own
+		 * address, so that the authenticator (e.g. hostapd) will see
+		 * the frame, but bridge won't forward it anywhere else. Note
+		 * that due to earlier filtering, the only other address can
+		 * be the PAE group address.
+		 */
+		if (unlikely(skb->protocol == sdata->control_port_protocol &&
+			     !ether_addr_equal(ehdr->h_dest, sdata->vif.addr)))
+			ether_addr_copy(ehdr->h_dest, sdata->vif.addr);

 		/* deliver to local stack */
 		if (rx->list)
···
 	if ((sdata->vif.type == NL80211_IFTYPE_AP ||
 	     sdata->vif.type == NL80211_IFTYPE_AP_VLAN) &&
 	    !(sdata->flags & IEEE80211_SDATA_DONT_BRIDGE_PACKETS) &&
+	    ehdr->h_proto != rx->sdata->control_port_protocol &&
 	    (sdata->vif.type != NL80211_IFTYPE_AP_VLAN || !sdata->u.vlan.sta)) {
 		if (is_multicast_ether_addr(ehdr->h_dest) &&
 		    ieee80211_vif_get_num_mcast_if(sdata) != 0) {
···
 	if (ieee80211_data_to_8023_exthdr(skb, &ethhdr,
 					  rx->sdata->vif.addr,
 					  rx->sdata->vif.type,
-					  data_offset))
+					  data_offset, true))
 		return RX_DROP_UNUSABLE;

 	ieee80211_amsdu_to_8023s(skb, &frame_list, dev->dev_addr,
···

 	if (is_multicast_ether_addr(hdr->addr1))
 		return RX_DROP_UNUSABLE;
+
+	if (rx->key) {
+		/*
+		 * We should not receive A-MSDUs on pre-HT connections,
+		 * and HT connections cannot use old ciphers. Thus drop
+		 * them, as in those cases we couldn't even have SPP
+		 * A-MSDUs or such.
+		 */
+		switch (rx->key->conf.cipher) {
+		case WLAN_CIPHER_SUITE_WEP40:
+		case WLAN_CIPHER_SUITE_WEP104:
+		case WLAN_CIPHER_SUITE_TKIP:
+			return RX_DROP_UNUSABLE;
+		default:
+			break;
+		}
+	}

 	return __ieee80211_rx_h_amsdu(rx, 0);
 }
···
  * Copyright 2002-2005, Devicescape Software, Inc.
  * Copyright 2013-2014  Intel Mobile Communications GmbH
  * Copyright(c) 2015-2017 Intel Deutschland GmbH
- * Copyright(c) 2020 Intel Corporation
+ * Copyright(c) 2020-2021 Intel Corporation
  */

 #ifndef STA_INFO_H
···
 };

 /*
+ * IEEE 802.11-2016 (10.6 "Defragmentation") recommends support for "concurrent
+ * reception of at least one MSDU per access category per associated STA"
+ * on APs, or "at least one MSDU per access category" on other interface types.
+ *
+ * This limit can be increased by changing this define, at the cost of slower
+ * frame reassembly and increased memory use while fragments are pending.
+ */
+#define IEEE80211_FRAGMENT_MAX 4
+
+struct ieee80211_fragment_entry {
+	struct sk_buff_head skb_list;
+	unsigned long first_frag_time;
+	u16 seq;
+	u16 extra_len;
+	u16 last_frag;
+	u8 rx_queue;
+	u8 check_sequential_pn:1, /* needed for CCMP/GCMP */
+	   is_protected:1;
+	u8 last_pn[6]; /* PN of the last fragment if CCMP was used */
+	unsigned int key_color;
+};
+
+struct ieee80211_fragment_cache {
+	struct ieee80211_fragment_entry	entries[IEEE80211_FRAGMENT_MAX];
+	unsigned int next;
+};
+
+/*
  * The bandwidth threshold below which the per-station CoDel parameters will be
  * scaled to be more lenient (to prevent starvation of slow stations). This
  * value will be scaled by the number of active stations when it is being
···
  * @status_stats.last_ack_signal: last ACK signal
  * @status_stats.ack_signal_filled: last ACK signal validity
  * @status_stats.avg_ack_signal: average ACK signal
+ * @frags: fragment cache
  */
 struct sta_info {
 	/* General information, mostly static */
···
 	u8 reserved_tid;

 	struct cfg80211_chan_def tdls_chandef;
+
+	struct ieee80211_fragment_cache frags;

 	/* keep last! */
 	struct ieee80211_sta sta;
+9-4
net/mac80211/wpa.c
···33 * Copyright 2002-2004, Instant802 Networks, Inc.44 * Copyright 2008, Jouni Malinen <j@w1.fi>55 * Copyright (C) 2016-2017 Intel Deutschland GmbH66+ * Copyright (C) 2020-2021 Intel Corporation67 */7889#include <linux/netdevice.h>···168167169168update_iv:170169 /* update IV in key information to be able to detect replays */171171- rx->key->u.tkip.rx[rx->security_idx].iv32 = rx->tkip_iv32;172172- rx->key->u.tkip.rx[rx->security_idx].iv16 = rx->tkip_iv16;170170+ rx->key->u.tkip.rx[rx->security_idx].iv32 = rx->tkip.iv32;171171+ rx->key->u.tkip.rx[rx->security_idx].iv16 = rx->tkip.iv16;173172174173 return RX_CONTINUE;175174···295294 key, skb->data + hdrlen,296295 skb->len - hdrlen, rx->sta->sta.addr,297296 hdr->addr1, hwaccel, rx->security_idx,298298- &rx->tkip_iv32,299299- &rx->tkip_iv16);297297+ &rx->tkip.iv32,298298+ &rx->tkip.iv16);300299 if (res != TKIP_DECRYPT_OK)301300 return RX_DROP_UNUSABLE;302301···554553 }555554556555 memcpy(key->u.ccmp.rx_pn[queue], pn, IEEE80211_CCMP_PN_LEN);556556+ if (unlikely(ieee80211_is_frag(hdr)))557557+ memcpy(rx->ccm_gcm.pn, pn, IEEE80211_CCMP_PN_LEN);557558 }558559559560 /* Remove CCMP header and MIC */···784781 }785782786783 memcpy(key->u.gcmp.rx_pn[queue], pn, IEEE80211_GCMP_PN_LEN);784784+ if (unlikely(ieee80211_is_frag(hdr)))785785+ memcpy(rx->ccm_gcm.pn, pn, IEEE80211_CCMP_PN_LEN);787786 }788787789788 /* Remove GCMP header and MIC */
···879879 !mpext->frozen;880880}881881882882+/* we can append data to the given data frag if:883883+ * - there is space available in the backing page_frag884884+ * - the data frag tail matches the current page_frag free offset885885+ * - the data frag end sequence number matches the current write seq886886+ */882887static bool mptcp_frag_can_collapse_to(const struct mptcp_sock *msk,883888 const struct page_frag *pfrag,884889 const struct mptcp_data_frag *df)885890{886891 return df && pfrag->page == df->page &&887892 pfrag->size - pfrag->offset > 0 &&893893+ pfrag->offset == (df->offset + df->data_len) &&888894 df->data_seq + df->data_len == msk->write_seq;889895}890896···24242418 timer_setup(&msk->sk.icsk_retransmit_timer, mptcp_retransmit_timer, 0);24252419 timer_setup(&sk->sk_timer, mptcp_timeout_timer, 0);2426242024272427- tcp_assign_congestion_control(sk);24282428-24292421 return 0;24302422}2431242324322424static int mptcp_init_sock(struct sock *sk)24332425{24262426+ struct inet_connection_sock *icsk = inet_csk(sk);24342427 struct net *net = sock_net(sk);24352428 int ret;24362429···24462441 ret = __mptcp_socket_create(mptcp_sk(sk));24472442 if (ret)24482443 return ret;24442444+24452445+ /* fetch the ca name; do it outside __mptcp_init_sock(), so that clone will24462446+ * propagate the correct value24472447+ */24482448+ tcp_assign_congestion_control(sk);24492449+ strcpy(mptcp_sk(sk)->ca_name, icsk->icsk_ca_ops->name);24502450+24512451+ /* no need to keep a reference to the ops, the name will suffice */24522452+ tcp_cleanup_congestion_control(sk);24532453+ icsk->icsk_ca_ops = NULL;2449245424502455 sk_sockets_allocated_inc(sk);24512456 sk->sk_rcvbuf = sock_net(sk)->ipv4.sysctl_tcp_rmem[1];···26312616 sk_stream_kill_queues(sk);26322617 xfrm_sk_free_policy(sk);2633261826342634- tcp_cleanup_congestion_control(sk);26352619 sk_refcnt_debug_release(sk);26362620 mptcp_dispose_initial_subflow(msk);26372621 sock_put(sk);
···178178179179int pipapo_refill(unsigned long *map, int len, int rules, unsigned long *dst,180180 union nft_pipapo_map_bucket *mt, bool match_only);181181+bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set,182182+ const u32 *key, const struct nft_set_ext **ext);181183182184/**183185 * pipapo_and_field_buckets_4bit() - Intersect 4-bit buckets
+3
net/netfilter/nft_set_pipapo_avx2.c
···11311131 bool map_index;11321132 int i, ret = 0;1133113311341134+ if (unlikely(!irq_fpu_usable()))11351135+ return nft_pipapo_lookup(net, set, key, ext);11361136+11341137 m = rcu_dereference(priv->match);1135113811361139 /* This also protects access to all data related to scratch maps */
+4-2
net/netlink/af_netlink.c
···461461static inline void462462netlink_lock_table(void)463463{464464+ unsigned long flags;465465+464466 /* read_lock() synchronizes us to netlink_table_grab */465467466466- read_lock(&nl_table_lock);468468+ read_lock_irqsave(&nl_table_lock, flags);467469 atomic_inc(&nl_table_users);468468- read_unlock(&nl_table_lock);470470+ read_unlock_irqrestore(&nl_table_lock, flags);469471}470472471473static inline void
···329329 return -ESOCKTNOSUPPORT;330330331331 if (sock->type == SOCK_RAW) {332332- if (!capable(CAP_NET_RAW))332332+ if (!ns_capable(net->user_ns, CAP_NET_RAW))333333 return -EPERM;334334 sock->ops = &rawsock_raw_ops;335335 } else {
+8
net/openvswitch/meter.c
···611611 spin_lock(&meter->lock);612612613613 long_delta_ms = (now_ms - meter->used); /* ms */614614+ if (long_delta_ms < 0) {615615+ /* This condition means that we have several threads fighting616616+ * for a meter lock, and the one who received the packets a617617+ * bit later wins. Assuming that all racing threads received618618+ * packets at the same time to avoid overflow.619619+ */620620+ long_delta_ms = 0;621621+ }614622615623 /* Make sure delta_ms will not be too large, so that bucket will not616624 * wrap around below.
+8-2
net/packet/af_packet.c
···422422 ktime_to_timespec64_cond(shhwtstamps->hwtstamp, ts))423423 return TP_STATUS_TS_RAW_HARDWARE;424424425425- if (ktime_to_timespec64_cond(skb->tstamp, ts))425425+ if ((flags & SOF_TIMESTAMPING_SOFTWARE) &&426426+ ktime_to_timespec64_cond(skb->tstamp, ts))426427 return TP_STATUS_TS_SOFTWARE;427428428429 return 0;···2341234023422341 skb_copy_bits(skb, 0, h.raw + macoff, snaplen);2343234223442344- if (!(ts_status = tpacket_get_timestamp(skb, &ts, po->tp_tstamp)))23432343+ /* Always timestamp; prefer an existing software timestamp taken23442344+ * closer to the time of capture.23452345+ */23462346+ ts_status = tpacket_get_timestamp(skb, &ts,23472347+ po->tp_tstamp | SOF_TIMESTAMPING_SOFTWARE);23482348+ if (!ts_status)23452349 ktime_get_real_ts64(&ts);2346235023472351 status |= ts_status;
+17-6
net/rds/connection.c
···240240 if (loop_trans) {241241 rds_trans_put(loop_trans);242242 conn->c_loopback = 1;243243- if (is_outgoing && trans->t_prefer_loopback) {244244- /* "outgoing" connection - and the transport245245- * says it wants the connection handled by the246246- * loopback transport. This is what TCP does.247247- */248248- trans = &rds_loop_transport;243243+ if (trans->t_prefer_loopback) {244244+ if (likely(is_outgoing)) {245245+ /* "outgoing" connection to local address.246246+ * Protocol says it wants the connection247247+ * handled by the loopback transport.248248+ * This is what TCP does.249249+ */250250+ trans = &rds_loop_transport;251251+ } else {252252+ /* No transport currently in use253253+ * should end up here, but if it254254+ * does, reset/destroy the connection.255255+ */256256+ kmem_cache_free(rds_conn_slab, conn);257257+ conn = ERR_PTR(-EOPNOTSUPP);258258+ goto out;259259+ }249260 }250261 }251262
+2-2
net/rds/tcp.c
···313313}314314#endif315315316316-static int rds_tcp_laddr_check(struct net *net, const struct in6_addr *addr,317317- __u32 scope_id)316316+int rds_tcp_laddr_check(struct net *net, const struct in6_addr *addr,317317+ __u32 scope_id)318318{319319 struct net_device *dev = NULL;320320#if IS_ENABLED(CONFIG_IPV6)
···167167 }168168#endif169169170170+ if (!rds_tcp_laddr_check(sock_net(sock->sk), peer_addr, dev_if)) {171171+ /* local address connection is only allowed via loopback */172172+ ret = -EOPNOTSUPP;173173+ goto out;174174+ }175175+170176 conn = rds_conn_create(sock_net(sock->sk),171177 my_addr, peer_addr,172178 &rds_tcp_transport, 0, GFP_KERNEL, dev_if);
+1-1
net/sched/cls_api.c
···1624162416251625 /* If we missed on some chain */16261626 if (ret == TC_ACT_UNSPEC && last_executed_chain) {16271627- ext = skb_ext_add(skb, TC_SKB_EXT);16271627+ ext = tc_skb_ext_alloc(skb);16281628 if (WARN_ON_ONCE(!ext))16291629 return TC_ACT_SHOT;16301630 ext->chain = last_executed_chain;
···16771677 return;16781678 }1679167916801680- /*16811681- * Even though there was an error, we may have acquired16821682- * a request slot somehow. Make sure not to leak it.16831683- */16841684- if (task->tk_rqstp)16851685- xprt_release(task);16861686-16871680 switch (status) {16881681 case -ENOMEM:16891682 rpc_delay(task, HZ >> 2);
···628628 return false;629629}630630631631-/* The tail iovec might not reside in the same page as the632632- * head iovec.631631+/* The tail iovec may include an XDR pad for the page list,632632+ * as well as additional content, and may not reside in the633633+ * same page as the head iovec.633634 */634635static bool rpcrdma_prepare_tail_iov(struct rpcrdma_req *req,635636 struct xdr_buf *xdr,···748747 struct rpcrdma_req *req,749748 struct xdr_buf *xdr)750749{751751- struct kvec *tail = &xdr->tail[0];752752-753750 if (!rpcrdma_prepare_head_iov(r_xprt, req, xdr->head[0].iov_len))754751 return false;755752756756- /* If there is a Read chunk, the page list is handled753753+ /* If there is a Read chunk, the page list is being handled757754 * via explicit RDMA, and thus is skipped here.758755 */759756760760- if (tail->iov_len) {761761- if (!rpcrdma_prepare_tail_iov(req, xdr,762762- offset_in_page(tail->iov_base),763763- tail->iov_len))757757+ /* Do not include the tail if it is only an XDR pad */758758+ if (xdr->tail[0].iov_len > 3) {759759+ unsigned int page_base, len;760760+761761+ /* If the content in the page list is an odd length,762762+ * xdr_write_pages() adds a pad at the beginning of763763+ * the tail iovec. Force the tail's non-pad content to764764+ * land at the next XDR position in the Send message.765765+ */766766+ page_base = offset_in_page(xdr->tail[0].iov_base);767767+ len = xdr->tail[0].iov_len;768768+ page_base += len & 3;769769+ len -= len & 3;770770+ if (!rpcrdma_prepare_tail_iov(req, xdr, page_base, len))764771 return false;765772 kref_get(&req->rl_kref);766773 }
···408408 },409409 {},410410};411411+MODULE_DEVICE_TABLE(of, sti_sas_dev_match);411412412413static int sti_sas_driver_probe(struct platform_device *pdev)413414{
+1
sound/soc/fsl/Kconfig
···119119 tristate "NXP Audio Base On RPMSG support"120120 depends on COMMON_CLK121121 depends on RPMSG122122+ depends on SND_IMX_SOC || SND_IMX_SOC = n122123 select SND_SOC_IMX_RPMSG if SND_IMX_SOC != n123124 help124125 Say Y if you want to add rpmsg audio support for the Freescale CPUs.
+29-28
sound/soc/generic/audio-graph-card.c
···209209static int graph_parse_node(struct asoc_simple_priv *priv,210210 struct device_node *ep,211211 struct link_info *li,212212- int is_cpu)212212+ int *cpu)213213{214214 struct device *dev = simple_priv_to_dev(priv);215215 struct device_node *top = dev->of_node;···217217 struct simple_dai_props *dai_props = simple_priv_to_props(priv, li->link);218218 struct snd_soc_dai_link_component *dlc;219219 struct asoc_simple_dai *dai;220220- int ret, single = 0;220220+ int ret;221221222222- if (is_cpu) {222222+ if (cpu) {223223 dlc = asoc_link_to_cpu(dai_link, 0);224224 dai = simple_props_to_dai_cpu(dai_props, 0);225225 } else {···229229230230 graph_parse_mclk_fs(top, ep, dai_props);231231232232- ret = asoc_simple_parse_dai(ep, dlc, &single);232232+ ret = asoc_simple_parse_dai(ep, dlc, cpu);233233 if (ret < 0)234234 return ret;235235···240240 ret = asoc_simple_parse_clk(dev, ep, dai, dlc);241241 if (ret < 0)242242 return ret;243243-244244- if (is_cpu)245245- asoc_simple_canonicalize_cpu(dlc, single);246243247244 return 0;248245}···273276 struct link_info *li)274277{275278 struct device *dev = simple_priv_to_dev(priv);276276- struct snd_soc_card *card = simple_priv_to_card(priv);277279 struct snd_soc_dai_link *dai_link = simple_priv_to_link(priv, li->link);278280 struct simple_dai_props *dai_props = simple_priv_to_props(priv, li->link);279281 struct device_node *top = dev->of_node;280282 struct device_node *ep = li->cpu ? 
cpu_ep : codec_ep;281281- struct device_node *port;282282- struct device_node *ports;283283- struct snd_soc_dai_link_component *cpus = asoc_link_to_cpu(dai_link, 0);284284- struct snd_soc_dai_link_component *codecs = asoc_link_to_codec(dai_link, 0);285283 char dai_name[64];286284 int ret;287287-288288- port = of_get_parent(ep);289289- ports = of_get_parent(port);290285291286 dev_dbg(dev, "link_of DPCM (%pOF)\n", ep);292287293288 if (li->cpu) {289289+ struct snd_soc_card *card = simple_priv_to_card(priv);290290+ struct snd_soc_dai_link_component *cpus = asoc_link_to_cpu(dai_link, 0);291291+ int is_single_links = 0;292292+294293 /* Codec is dummy */295294296295 /* FE settings */297296 dai_link->dynamic = 1;298297 dai_link->dpcm_merged_format = 1;299298300300- ret = graph_parse_node(priv, cpu_ep, li, 1);299299+ ret = graph_parse_node(priv, cpu_ep, li, &is_single_links);301300 if (ret)302302- goto out_put_node;301301+ return ret;303302304303 snprintf(dai_name, sizeof(dai_name),305304 "fe.%pOFP.%s", cpus->of_node, cpus->dai_name);···311318 */312319 if (card->component_chaining && !soc_component_is_pcm(cpus))313320 dai_link->no_pcm = 1;321321+322322+ asoc_simple_canonicalize_cpu(cpus, is_single_links);314323 } else {315315- struct snd_soc_codec_conf *cconf;324324+ struct snd_soc_codec_conf *cconf = simple_props_to_codec_conf(dai_props, 0);325325+ struct snd_soc_dai_link_component *codecs = asoc_link_to_codec(dai_link, 0);326326+ struct device_node *port;327327+ struct device_node *ports;316328317329 /* CPU is dummy */318330···325327 dai_link->no_pcm = 1;326328 dai_link->be_hw_params_fixup = asoc_simple_be_hw_params_fixup;327329328328- cconf = simple_props_to_codec_conf(dai_props, 0);329329-330330- ret = graph_parse_node(priv, codec_ep, li, 0);330330+ ret = graph_parse_node(priv, codec_ep, li, NULL);331331 if (ret < 0)332332- goto out_put_node;332332+ return ret;333333334334 snprintf(dai_name, sizeof(dai_name),335335 "be.%pOFP.%s", codecs->of_node, 
codecs->dai_name);336336337337 /* check "prefix" from top node */338338+ port = of_get_parent(ep);339339+ ports = of_get_parent(port);338340 snd_soc_of_parse_node_prefix(top, cconf, codecs->of_node,339341 "prefix");340342 if (of_node_name_eq(ports, "ports"))341343 snd_soc_of_parse_node_prefix(ports, cconf, codecs->of_node, "prefix");342344 snd_soc_of_parse_node_prefix(port, cconf, codecs->of_node,343345 "prefix");346346+347347+ of_node_put(ports);348348+ of_node_put(port);344349 }345350346351 graph_parse_convert(dev, ep, &dai_props->adata);···352351353352 ret = graph_link_init(priv, cpu_ep, codec_ep, li, dai_name);354353355355-out_put_node:356354 li->link++;357355358358- of_node_put(ports);359359- of_node_put(port);360356 return ret;361357}362358···367369 struct snd_soc_dai_link_component *cpus = asoc_link_to_cpu(dai_link, 0);368370 struct snd_soc_dai_link_component *codecs = asoc_link_to_codec(dai_link, 0);369371 char dai_name[64];370370- int ret;372372+ int ret, is_single_links = 0;371373372374 dev_dbg(dev, "link_of (%pOF)\n", cpu_ep);373375374374- ret = graph_parse_node(priv, cpu_ep, li, 1);376376+ ret = graph_parse_node(priv, cpu_ep, li, &is_single_links);375377 if (ret < 0)376378 return ret;377379378378- ret = graph_parse_node(priv, codec_ep, li, 0);380380+ ret = graph_parse_node(priv, codec_ep, li, NULL);379381 if (ret < 0)380382 return ret;381383382384 snprintf(dai_name, sizeof(dai_name),383385 "%s-%s", cpus->dai_name, codecs->dai_name);386386+387387+ asoc_simple_canonicalize_cpu(cpus, is_single_links);388388+384389 ret = graph_link_init(priv, cpu_ep, codec_ep, li, dai_name);385390 if (ret < 0)386391 return ret;
+83-85
sound/soc/generic/simple-card.c
···9393}94949595static void simple_parse_mclk_fs(struct device_node *top,9696- struct device_node *cpu,9797- struct device_node *codec,9696+ struct device_node *np,9897 struct simple_dai_props *props,9998 char *prefix)10099{101101- struct device_node *node = of_get_parent(cpu);100100+ struct device_node *node = of_get_parent(np);102101 char prop[128];103102104103 snprintf(prop, sizeof(prop), "%smclk-fs", PREFIX);···105106106107 snprintf(prop, sizeof(prop), "%smclk-fs", prefix);107108 of_property_read_u32(node, prop, &props->mclk_fs);108108- of_property_read_u32(cpu, prop, &props->mclk_fs);109109- of_property_read_u32(codec, prop, &props->mclk_fs);109109+ of_property_read_u32(np, prop, &props->mclk_fs);110110111111 of_node_put(node);112112+}113113+114114+static int simple_parse_node(struct asoc_simple_priv *priv,115115+ struct device_node *np,116116+ struct link_info *li,117117+ char *prefix,118118+ int *cpu)119119+{120120+ struct device *dev = simple_priv_to_dev(priv);121121+ struct device_node *top = dev->of_node;122122+ struct snd_soc_dai_link *dai_link = simple_priv_to_link(priv, li->link);123123+ struct simple_dai_props *dai_props = simple_priv_to_props(priv, li->link);124124+ struct snd_soc_dai_link_component *dlc;125125+ struct asoc_simple_dai *dai;126126+ int ret;127127+128128+ if (cpu) {129129+ dlc = asoc_link_to_cpu(dai_link, 0);130130+ dai = simple_props_to_dai_cpu(dai_props, 0);131131+ } else {132132+ dlc = asoc_link_to_codec(dai_link, 0);133133+ dai = simple_props_to_dai_codec(dai_props, 0);134134+ }135135+136136+ simple_parse_mclk_fs(top, np, dai_props, prefix);137137+138138+ ret = asoc_simple_parse_dai(np, dlc, cpu);139139+ if (ret)140140+ return ret;141141+142142+ ret = asoc_simple_parse_clk(dev, np, dai, dlc);143143+ if (ret)144144+ return ret;145145+146146+ ret = asoc_simple_parse_tdm(np, dai);147147+ if (ret)148148+ return ret;149149+150150+ return 0;151151+}152152+153153+static int simple_link_init(struct asoc_simple_priv *priv,154154+ struct 
device_node *node,155155+ struct device_node *codec,156156+ struct link_info *li,157157+ char *prefix, char *name)158158+{159159+ struct device *dev = simple_priv_to_dev(priv);160160+ struct snd_soc_dai_link *dai_link = simple_priv_to_link(priv, li->link);161161+ int ret;162162+163163+ ret = asoc_simple_parse_daifmt(dev, node, codec,164164+ prefix, &dai_link->dai_fmt);165165+ if (ret < 0)166166+ return 0;167167+168168+ dai_link->init = asoc_simple_dai_init;169169+ dai_link->ops = &simple_ops;170170+171171+ return asoc_simple_set_dailink_name(dev, dai_link, name);112172}113173114174static int simple_dai_link_of_dpcm(struct asoc_simple_priv *priv,···179121 struct device *dev = simple_priv_to_dev(priv);180122 struct snd_soc_dai_link *dai_link = simple_priv_to_link(priv, li->link);181123 struct simple_dai_props *dai_props = simple_priv_to_props(priv, li->link);182182- struct asoc_simple_dai *dai;183183- struct snd_soc_dai_link_component *cpus = asoc_link_to_cpu(dai_link, 0);184184- struct snd_soc_dai_link_component *codecs = asoc_link_to_codec(dai_link, 0);185185- struct snd_soc_dai_link_component *platforms = asoc_link_to_platform(dai_link, 0);186124 struct device_node *top = dev->of_node;187125 struct device_node *node = of_get_parent(np);188126 char *prefix = "";127127+ char dai_name[64];189128 int ret;190129191130 dev_dbg(dev, "link_of DPCM (%pOF)\n", np);192192-193193- li->link++;194131195132 /* For single DAI link & old style of DT node */196133 if (is_top)197134 prefix = PREFIX;198135199136 if (li->cpu) {137137+ struct snd_soc_dai_link_component *cpus = asoc_link_to_cpu(dai_link, 0);138138+ struct snd_soc_dai_link_component *platforms = asoc_link_to_platform(dai_link, 0);200139 int is_single_links = 0;201140202141 /* Codec is dummy */···202147 dai_link->dynamic = 1;203148 dai_link->dpcm_merged_format = 1;204149205205- dai = simple_props_to_dai_cpu(dai_props, 0);206206-207207- ret = asoc_simple_parse_dai(np, cpus, &is_single_links);208208- if (ret)209209- goto 
out_put_node;210210-211211- ret = asoc_simple_parse_clk(dev, np, dai, cpus);150150+ ret = simple_parse_node(priv, np, li, prefix, &is_single_links);212151 if (ret < 0)213152 goto out_put_node;214153215215- ret = asoc_simple_set_dailink_name(dev, dai_link,216216- "fe.%s",217217- cpus->dai_name);218218- if (ret < 0)219219- goto out_put_node;154154+ snprintf(dai_name, sizeof(dai_name), "fe.%s", cpus->dai_name);220155221156 asoc_simple_canonicalize_cpu(cpus, is_single_links);222157 asoc_simple_canonicalize_platform(platforms, cpus);223158 } else {159159+ struct snd_soc_dai_link_component *codecs = asoc_link_to_codec(dai_link, 0);224160 struct snd_soc_codec_conf *cconf;225161226162 /* CPU is dummy */···220174 dai_link->no_pcm = 1;221175 dai_link->be_hw_params_fixup = asoc_simple_be_hw_params_fixup;222176223223- dai = simple_props_to_dai_codec(dai_props, 0);224177 cconf = simple_props_to_codec_conf(dai_props, 0);225178226226- ret = asoc_simple_parse_dai(np, codecs, NULL);179179+ ret = simple_parse_node(priv, np, li, prefix, NULL);227180 if (ret < 0)228181 goto out_put_node;229182230230- ret = asoc_simple_parse_clk(dev, np, dai, codecs);231231- if (ret < 0)232232- goto out_put_node;233233-234234- ret = asoc_simple_set_dailink_name(dev, dai_link,235235- "be.%s",236236- codecs->dai_name);237237- if (ret < 0)238238- goto out_put_node;183183+ snprintf(dai_name, sizeof(dai_name), "be.%s", codecs->dai_name);239184240185 /* check "prefix" from top node */241186 snd_soc_of_parse_node_prefix(top, cconf, codecs->of_node,···238201 }239202240203 simple_parse_convert(dev, np, &dai_props->adata);241241- simple_parse_mclk_fs(top, np, codec, dai_props, prefix);242242-243243- ret = asoc_simple_parse_tdm(np, dai);244244- if (ret)245245- goto out_put_node;246246-247247- ret = asoc_simple_parse_daifmt(dev, node, codec,248248- prefix, &dai_link->dai_fmt);249249- if (ret < 0)250250- goto out_put_node;251204252205 snd_soc_dai_link_set_capabilities(dai_link);253206254254- dai_link->ops = 
&simple_ops;255255- dai_link->init = asoc_simple_dai_init;207207+ ret = simple_link_init(priv, node, codec, li, prefix, dai_name);256208257209out_put_node:210210+ li->link++;211211+258212 of_node_put(node);259213 return ret;260214}···258230{259231 struct device *dev = simple_priv_to_dev(priv);260232 struct snd_soc_dai_link *dai_link = simple_priv_to_link(priv, li->link);261261- struct simple_dai_props *dai_props = simple_priv_to_props(priv, li->link);262262- struct asoc_simple_dai *cpu_dai = simple_props_to_dai_cpu(dai_props, 0);263263- struct asoc_simple_dai *codec_dai = simple_props_to_dai_codec(dai_props, 0);264233 struct snd_soc_dai_link_component *cpus = asoc_link_to_cpu(dai_link, 0);265234 struct snd_soc_dai_link_component *codecs = asoc_link_to_codec(dai_link, 0);266235 struct snd_soc_dai_link_component *platforms = asoc_link_to_platform(dai_link, 0);267267- struct device_node *top = dev->of_node;268236 struct device_node *cpu = NULL;269237 struct device_node *node = NULL;270238 struct device_node *plat = NULL;239239+ char dai_name[64];271240 char prop[128];272241 char *prefix = "";273242 int ret, single_cpu = 0;274243275244 cpu = np;276245 node = of_get_parent(np);277277- li->link++;278246279247 dev_dbg(dev, "link_of (%pOF)\n", node);280248···281257 snprintf(prop, sizeof(prop), "%splat", prefix);282258 plat = of_get_child_by_name(node, prop);283259284284- ret = asoc_simple_parse_daifmt(dev, node, codec,285285- prefix, &dai_link->dai_fmt);260260+ ret = simple_parse_node(priv, cpu, li, prefix, &single_cpu);286261 if (ret < 0)287262 goto dai_link_of_err;288263289289- simple_parse_mclk_fs(top, cpu, codec, dai_props, prefix);290290-291291- ret = asoc_simple_parse_dai(cpu, cpus, &single_cpu);292292- if (ret < 0)293293- goto dai_link_of_err;294294-295295- ret = asoc_simple_parse_dai(codec, codecs, NULL);264264+ ret = simple_parse_node(priv, codec, li, prefix, NULL);296265 if (ret < 0)297266 goto dai_link_of_err;298267···293276 if (ret < 0)294277 goto 
dai_link_of_err;295278296296- ret = asoc_simple_parse_tdm(cpu, cpu_dai);297297- if (ret < 0)298298- goto dai_link_of_err;299299-300300- ret = asoc_simple_parse_tdm(codec, codec_dai);301301- if (ret < 0)302302- goto dai_link_of_err;303303-304304- ret = asoc_simple_parse_clk(dev, cpu, cpu_dai, cpus);305305- if (ret < 0)306306- goto dai_link_of_err;307307-308308- ret = asoc_simple_parse_clk(dev, codec, codec_dai, codecs);309309- if (ret < 0)310310- goto dai_link_of_err;311311-312312- ret = asoc_simple_set_dailink_name(dev, dai_link,313313- "%s-%s",314314- cpus->dai_name,315315- codecs->dai_name);316316- if (ret < 0)317317- goto dai_link_of_err;318318-319319- dai_link->ops = &simple_ops;320320- dai_link->init = asoc_simple_dai_init;279279+ snprintf(dai_name, sizeof(dai_name),280280+ "%s-%s", cpus->dai_name, codecs->dai_name);321281322282 asoc_simple_canonicalize_cpu(cpus, single_cpu);323283 asoc_simple_canonicalize_platform(platforms, cpus);324284285285+ ret = simple_link_init(priv, node, codec, li, prefix, dai_name);286286+325287dai_link_of_err:326288 of_node_put(plat);327289 of_node_put(node);290290+291291+ li->link++;328292329293 return ret;330294}
···88 * Note: you must update KVM_API_VERSION if you change this interface.99 */10101111+#include <linux/const.h>1112#include <linux/types.h>1213#include <linux/compiler.h>1314#include <linux/ioctl.h>···18801879 * conversion after harvesting an entry. Also, it must not skip any18811880 * dirty bits, so that dirty bits are always harvested in sequence.18821881 */18831883-#define KVM_DIRTY_GFN_F_DIRTY BIT(0)18841884-#define KVM_DIRTY_GFN_F_RESET BIT(1)18821882+#define KVM_DIRTY_GFN_F_DIRTY _BITUL(0)18831883+#define KVM_DIRTY_GFN_F_RESET _BITUL(1)18851884#define KVM_DIRTY_GFN_F_MASK 0x31886188518871886/*
···4141#define ELF_C_READ_MMAP ELF_C_READ4242#endif43434444+/* Older libelf all end up in this expression, for both 32 and 64 bit */4545+#ifndef GELF_ST_VISIBILITY4646+#define GELF_ST_VISIBILITY(o) ((o) & 0x03)4747+#endif4848+4449#define BTF_INFO_ENC(kind, kind_flag, vlen) \4550 ((!!(kind_flag) << 31) | ((kind) << 24) | ((vlen) & BTF_MAX_VLEN))4651#define BTF_TYPE_ENC(name, info, size_or_type) (name), (info), (size_or_type)
+2-2
tools/perf/perf.c
···443443 const char *cmd;444444 char sbuf[STRERR_BUFSIZE];445445446446+ perf_debug_setup();447447+446448 /* libsubcmd init */447449 exec_cmd_init("perf", PREFIX, PERF_EXEC_PATH, EXEC_PATH_ENVIRONMENT);448450 pager_init(PERF_PAGER_ENVIRONMENT);···532530 * forever while the signal goes to some other non interested thread.533531 */534532 pthread__block_sigwinch();535535-536536- perf_debug_setup();537533538534 while (1) {539535 static int done_help;
···11[22 {33- "EventCode": "1003C",33+ "EventCode": "0x1003C",44 "EventName": "PM_EXEC_STALL_DMISS_L2L3",55 "BriefDescription": "Cycles in which the oldest instruction in the pipeline was waiting for a load miss to resolve from either the local L2 or local L3."66 },77 {88- "EventCode": "34056",99- "EventName": "PM_EXEC_STALL_LOAD_FINISH",1010- "BriefDescription": "Cycles in which the oldest instruction in the pipeline was finishing a load after its data was reloaded from a data source beyond the local L1; cycles in which the LSU was processing an L1-hit; cycles in which the NTF instruction merged with another load in the LMQ."88+ "EventCode": "0x1E054",99+ "EventName": "PM_EXEC_STALL_DMISS_L21_L31",1010+ "BriefDescription": "Cycles in which the oldest instruction in the pipeline was waiting for a load miss to resolve from another core's L2 or L3 on the same chip."1111 },1212 {1313- "EventCode": "3006C",1313+ "EventCode": "0x34054",1414+ "EventName": "PM_EXEC_STALL_DMISS_L2L3_NOCONFLICT",1515+ "BriefDescription": "Cycles in which the oldest instruction in the pipeline was waiting for a load miss to resolve from the local L2 or local L3, without a dispatch conflict."1616+ },1717+ {1818+ "EventCode": "0x34056",1919+ "EventName": "PM_EXEC_STALL_LOAD_FINISH",2020+ "BriefDescription": "Cycles in which the oldest instruction in the pipeline was finishing a load after its data was reloaded from a data source beyond the local L1; cycles in which the LSU was processing an L1-hit; cycles in which the NTF instruction merged with another load in the LMQ; cycles in which the NTF instruction is waiting for a data reload for a load miss, but the data comes back with a non-NTF instruction."2121+ },2222+ {2323+ "EventCode": "0x3006C",1424 "EventName": "PM_RUN_CYC_SMT2_MODE",1525 "BriefDescription": "Cycles when this thread's run latch is set and the core is in SMT2 mode."1626 },1727 {1818- "EventCode": "300F4",2828+ "EventCode": "0x300F4",1929 "EventName": 
"PM_RUN_INST_CMPL_CONC",2030 "BriefDescription": "PowerPC instructions completed by this thread when all threads in the core had the run-latch set."2131 },2232 {2323- "EventCode": "4C016",3333+ "EventCode": "0x4C016",2434 "EventName": "PM_EXEC_STALL_DMISS_L2L3_CONFLICT",2535 "BriefDescription": "Cycles in which the oldest instruction in the pipeline was waiting for a load miss to resolve from the local L2 or local L3, with a dispatch conflict."2636 },2737 {2828- "EventCode": "4D014",3838+ "EventCode": "0x4D014",2939 "EventName": "PM_EXEC_STALL_LOAD",3040 "BriefDescription": "Cycles in which the oldest instruction in the pipeline was a load instruction executing in the Load Store Unit."3141 },3242 {3333- "EventCode": "4D016",4343+ "EventCode": "0x4D016",3444 "EventName": "PM_EXEC_STALL_PTESYNC",3545 "BriefDescription": "Cycles in which the oldest instruction in the pipeline was a PTESYNC instruction executing in the Load Store Unit."3646 },3747 {3838- "EventCode": "401EA",4848+ "EventCode": "0x401EA",3949 "EventName": "PM_THRESH_EXC_128",4050 "BriefDescription": "Threshold counter exceeded a value of 128."4151 },4252 {4343- "EventCode": "400F6",5353+ "EventCode": "0x400F6",4454 "EventName": "PM_BR_MPRED_CMPL",4555 "BriefDescription": "A mispredicted branch completed. Includes direction and target."4656 }
···11[22 {33- "EventCode": "10004",33+ "EventCode": "0x10004",44 "EventName": "PM_EXEC_STALL_TRANSLATION",55 "BriefDescription": "Cycles in which the oldest instruction in the pipeline suffered a TLB miss or ERAT miss and waited for it to resolve."66 },77 {88- "EventCode": "10010",88+ "EventCode": "0x10006",99+ "EventName": "PM_DISP_STALL_HELD_OTHER_CYC",1010+ "BriefDescription": "Cycles in which the NTC instruction is held at dispatch for any other reason."1111+ },1212+ {1313+ "EventCode": "0x10010",914 "EventName": "PM_PMC4_OVERFLOW",1015 "BriefDescription": "The event selected for PMC4 caused the event counter to overflow."1116 },1217 {1313- "EventCode": "10020",1818+ "EventCode": "0x10020",1419 "EventName": "PM_PMC4_REWIND",1520 "BriefDescription": "The speculative event selected for PMC4 rewinds and the counter for PMC4 is not charged."1621 },1722 {1818- "EventCode": "10038",2323+ "EventCode": "0x10038",1924 "EventName": "PM_DISP_STALL_TRANSLATION",2025 "BriefDescription": "Cycles when dispatch was stalled for this thread because the MMU was handling a translation miss."2126 },2227 {2323- "EventCode": "1003A",2828+ "EventCode": "0x1003A",2429 "EventName": "PM_DISP_STALL_BR_MPRED_IC_L2",2530 "BriefDescription": "Cycles when dispatch was stalled while the instruction was fetched from the local L2 after suffering a branch mispredict."2631 },2732 {2828- "EventCode": "1E050",3333+ "EventCode": "0x1D05E",3434+ "EventName": "PM_DISP_STALL_HELD_HALT_CYC",3535+ "BriefDescription": "Cycles in which the NTC instruction is held at dispatch because of power management."3636+ },3737+ {3838+ "EventCode": "0x1E050",2939 "EventName": "PM_DISP_STALL_HELD_STF_MAPPER_CYC",3040 "BriefDescription": "Cycles in which the NTC instruction is held at dispatch because the STF mapper/SRB was full. 
Includes GPR (count, link, tar), VSR, VMR, FPR."3141 },3242 {3333- "EventCode": "1F054",4343+ "EventCode": "0x1F054",3444 "EventName": "PM_DTLB_HIT",3545 "BriefDescription": "The PTE required by the instruction was resident in the TLB (data TLB access). When MMCR1[16]=0 this event counts only demand hits. When MMCR1[16]=1 this event includes demand and prefetch. Applies to both HPT and RPT."3646 },3747 {3838- "EventCode": "101E8",4848+ "EventCode": "0x10064",4949+ "EventName": "PM_DISP_STALL_IC_L2",5050+ "BriefDescription": "Cycles when dispatch was stalled while the instruction was fetched from the local L2."5151+ },5252+ {5353+ "EventCode": "0x101E8",3954 "EventName": "PM_THRESH_EXC_256",4055 "BriefDescription": "Threshold counter exceeded a count of 256."4156 },4257 {4343- "EventCode": "101EC",5858+ "EventCode": "0x101EC",4459 "EventName": "PM_THRESH_MET",4560 "BriefDescription": "Threshold exceeded."4661 },4762 {4848- "EventCode": "100F2",6363+ "EventCode": "0x100F2",4964 "EventName": "PM_1PLUS_PPC_CMPL",5065 "BriefDescription": "Cycles in which at least one instruction is completed by this thread."5166 },5267 {5353- "EventCode": "100F6",6868+ "EventCode": "0x100F6",5469 "EventName": "PM_IERAT_MISS",5570 "BriefDescription": "IERAT Reloaded to satisfy an IERAT miss. All page sizes are counted by this event."5671 },5772 {5858- "EventCode": "100F8",7373+ "EventCode": "0x100F8",5974 "EventName": "PM_DISP_STALL_CYC",6075 "BriefDescription": "Cycles the ICT has no itags assigned to this thread (no instructions were dispatched during these cycles)."6176 },6277 {6363- "EventCode": "20114",7878+ "EventCode": "0x20006",7979+ "EventName": "PM_DISP_STALL_HELD_ISSQ_FULL_CYC",8080+ "BriefDescription": "Cycles in which the NTC instruction is held at dispatch due to Issue queue full. 
Includes issue queue and branch queue."8181+ },8282+ {8383+ "EventCode": "0x20114",6484 "EventName": "PM_MRK_L2_RC_DISP",6585 "BriefDescription": "Marked instruction RC dispatched in L2."6686 },6787 {6868- "EventCode": "2C010",8888+ "EventCode": "0x2C010",6989 "EventName": "PM_EXEC_STALL_LSU",7090 "BriefDescription": "Cycles in which the oldest instruction in the pipeline was executing in the Load Store Unit. This does not include simple fixed point instructions."7191 },7292 {7373- "EventCode": "2C016",9393+ "EventCode": "0x2C016",7494 "EventName": "PM_DISP_STALL_IERAT_ONLY_MISS",7595 "BriefDescription": "Cycles when dispatch was stalled while waiting to resolve an instruction ERAT miss."7696 },7797 {7878- "EventCode": "2C01E",9898+ "EventCode": "0x2C01E",7999 "EventName": "PM_DISP_STALL_BR_MPRED_IC_L3",80100 "BriefDescription": "Cycles when dispatch was stalled while the instruction was fetched from the local L3 after suffering a branch mispredict."81101 },82102 {8383- "EventCode": "2D01A",103103+ "EventCode": "0x2D01A",84104 "EventName": "PM_DISP_STALL_IC_MISS",85105 "BriefDescription": "Cycles when dispatch was stalled for this thread due to an Icache Miss."86106 },87107 {8888- "EventCode": "2D01C",8989- "EventName": "PM_CMPL_STALL_STCX",9090- "BriefDescription": "Cycles in which the oldest instruction in the pipeline was a stcx waiting for resolution from the nest before completing."9191- },9292- {9393- "EventCode": "2E018",108108+ "EventCode": "0x2E018",94109 "EventName": "PM_DISP_STALL_FETCH",95110 "BriefDescription": "Cycles when dispatch was stalled for this thread because Fetch was being held."96111 },97112 {9898- "EventCode": "2E01A",113113+ "EventCode": "0x2E01A",99114 "EventName": "PM_DISP_STALL_HELD_XVFC_MAPPER_CYC",100115 "BriefDescription": "Cycles in which the NTC instruction is held at dispatch because the XVFC mapper/SRB was full."101116 },102117 {103103- "EventCode": "2C142",118118+ "EventCode": "0x2C142",104119 "EventName": 
"PM_MRK_XFER_FROM_SRC_PMC2",105120 "BriefDescription": "For a marked data transfer instruction, the processor's L1 data cache was reloaded from the source specified in MMCR3[15:27]. If MMCR1[16|17] is 0 (default), this count includes only lines that were reloaded to satisfy a demand miss. If MMCR1[16|17] is 1, this count includes both demand misses and prefetch reloads."106121 },107122 {108108- "EventCode": "24050",123123+ "EventCode": "0x24050",109124 "EventName": "PM_IOPS_DISP",110125 "BriefDescription": "Internal Operations dispatched. PM_IOPS_DISP / PM_INST_DISP will show the average number of internal operations per PowerPC instruction."111126 },112127 {113113- "EventCode": "2405E",128128+ "EventCode": "0x2405E",114129 "EventName": "PM_ISSUE_CANCEL",115130 "BriefDescription": "An instruction issued and the issue was later cancelled. Only one cancel per PowerPC instruction."116131 },117132 {118118- "EventCode": "200FA",133133+ "EventCode": "0x200FA",119134 "EventName": "PM_BR_TAKEN_CMPL",120135 "BriefDescription": "Branch Taken instruction completed."121136 },122137 {123123- "EventCode": "30012",138138+ "EventCode": "0x30004",139139+ "EventName": "PM_DISP_STALL_FLUSH",140140+ "BriefDescription": "Cycles when dispatch was stalled because of a flush that happened to an instruction(s) that was not yet NTC. 
PM_EXEC_STALL_NTC_FLUSH only includes instructions that were flushed after becoming NTC."141141+ },142142+ {143143+ "EventCode": "0x3000A",144144+ "EventName": "PM_DISP_STALL_ITLB_MISS",145145+ "BriefDescription": "Cycles when dispatch was stalled while waiting to resolve an instruction TLB miss."146146+ },147147+ {148148+ "EventCode": "0x30012",124149 "EventName": "PM_FLUSH_COMPLETION",125150 "BriefDescription": "The instruction that was next to complete (oldest in the pipeline) did not complete because it suffered a flush."126151 },127152 {128128- "EventCode": "30014",153153+ "EventCode": "0x30014",129154 "EventName": "PM_EXEC_STALL_STORE",130155 "BriefDescription": "Cycles in which the oldest instruction in the pipeline was a store instruction executing in the Load Store Unit."131156 },132157 {133133- "EventCode": "30018",158158+ "EventCode": "0x30018",134159 "EventName": "PM_DISP_STALL_HELD_SCOREBOARD_CYC",135160 "BriefDescription": "Cycles in which the NTC instruction is held at dispatch while waiting on the Scoreboard. This event combines VSCR and FPSCR together."136161 },137162 {138138- "EventCode": "30026",163163+ "EventCode": "0x30026",139164 "EventName": "PM_EXEC_STALL_STORE_MISS",140165 "BriefDescription": "Cycles in which the oldest instruction in the pipeline was a store whose cache line was not resident in the L1 and was waiting for allocation of the missing line into the L1."141166 },142167 {143143- "EventCode": "3012A",168168+ "EventCode": "0x3012A",144169 "EventName": "PM_MRK_L2_RC_DONE",145170 "BriefDescription": "L2 RC machine completed the transaction for the marked instruction."146171 },147172 {148148- "EventCode": "3F046",173173+ "EventCode": "0x3F046",149174 "EventName": "PM_ITLB_HIT_1G",150175 "BriefDescription": "Instruction TLB hit (IERAT reload) page size 1G, which implies Radix Page Table translation is in use. When MMCR1[17]=0 this event counts only for demand misses. 
When MMCR1[17]=1 this event includes demand misses and prefetches."151176 },152177 {153153- "EventCode": "34058",178178+ "EventCode": "0x34058",154179 "EventName": "PM_DISP_STALL_BR_MPRED_ICMISS",155180 "BriefDescription": "Cycles when dispatch was stalled after a mispredicted branch resulted in an instruction cache miss."156181 },157182 {158158- "EventCode": "3D05C",183183+ "EventCode": "0x3D05C",159184 "EventName": "PM_DISP_STALL_HELD_RENAME_CYC",160185 "BriefDescription": "Cycles in which the NTC instruction is held at dispatch because the mapper/SRB was full. Includes GPR (count, link, tar), VSR, VMR, FPR and XVFC."161186 },162187 {163163- "EventCode": "3E052",188188+ "EventCode": "0x3E052",164189 "EventName": "PM_DISP_STALL_IC_L3",165190 "BriefDescription": "Cycles when dispatch was stalled while the instruction was fetched from the local L3."166191 },167192 {168168- "EventCode": "3E054",193193+ "EventCode": "0x3E054",169194 "EventName": "PM_LD_MISS_L1",170195 "BriefDescription": "Load Missed L1, counted at execution time (can be greater than loads finished). LMQ merges are not included in this count. i.e. if a load instruction misses on an address that is already allocated on the LMQ, this event will not increment for that load). 
Note that this count is per slice, so if a load spans multiple slices this event will increment multiple times for a single load."171196 },172197 {173173- "EventCode": "301EA",198198+ "EventCode": "0x301EA",174199 "EventName": "PM_THRESH_EXC_1024",175200 "BriefDescription": "Threshold counter exceeded a value of 1024."176201 },177202 {178178- "EventCode": "300FA",203203+ "EventCode": "0x300FA",179204 "EventName": "PM_INST_FROM_L3MISS",180205 "BriefDescription": "The processor's instruction cache was reloaded from a source other than the local core's L1, L2, or L3 due to a demand miss."181206 },182207 {183183- "EventCode": "40006",208208+ "EventCode": "0x40006",184209 "EventName": "PM_ISSUE_KILL",185210 "BriefDescription": "Cycles in which an instruction or group of instructions were cancelled after being issued. This event increments once per occurrence, regardless of how many instructions are included in the issue group."186211 },187212 {188188- "EventCode": "40116",213213+ "EventCode": "0x40116",189214 "EventName": "PM_MRK_LARX_FIN",190215 "BriefDescription": "Marked load and reserve instruction (LARX) finished. 
LARX and STCX are instructions used to acquire a lock."191216 },192217 {193193- "EventCode": "4C010",218218+ "EventCode": "0x4C010",194219 "EventName": "PM_DISP_STALL_BR_MPRED_IC_L3MISS",195220 "BriefDescription": "Cycles when dispatch was stalled while the instruction was fetched from sources beyond the local L3 after suffering a mispredicted branch."196221 },197222 {198198- "EventCode": "4D01E",223223+ "EventCode": "0x4D01E",199224 "EventName": "PM_DISP_STALL_BR_MPRED",200225 "BriefDescription": "Cycles when dispatch was stalled for this thread due to a mispredicted branch."201226 },202227 {203203- "EventCode": "4E010",228228+ "EventCode": "0x4E010",204229 "EventName": "PM_DISP_STALL_IC_L3MISS",205230 "BriefDescription": "Cycles when dispatch was stalled while the instruction was fetched from any source beyond the local L3."206231 },207232 {208208- "EventCode": "4E01A",233233+ "EventCode": "0x4E01A",209234 "EventName": "PM_DISP_STALL_HELD_CYC",210235 "BriefDescription": "Cycles in which the NTC instruction is held at dispatch for any reason."211236 },212237 {213213- "EventCode": "44056",238238+ "EventCode": "0x4003C",239239+ "EventName": "PM_DISP_STALL_HELD_SYNC_CYC",240240+ "BriefDescription": "Cycles in which the NTC instruction is held at dispatch because of a synchronizing instruction that requires the ICT to be empty before dispatch."241241+ },242242+ {243243+ "EventCode": "0x44056",214244 "EventName": "PM_VECTOR_ST_CMPL",215245 "BriefDescription": "Vector store instructions completed."216246 }
···11[22 {33- "EventCode": "1E058",33+ "EventCode": "0x1E058",44 "EventName": "PM_STCX_FAIL_FIN",55 "BriefDescription": "Conditional store instruction (STCX) failed. LARX and STCX are instructions used to acquire a lock."66 },77 {88- "EventCode": "4E050",88+ "EventCode": "0x4E050",99 "EventName": "PM_STCX_PASS_FIN",1010 "BriefDescription": "Conditional store instruction (STCX) passed. LARX and STCX are instructions used to acquire a lock."1111 }
···11[22 {33- "EventCode": "1002C",33+ "EventCode": "0x1002C",44 "EventName": "PM_LD_PREFETCH_CACHE_LINE_MISS",55 "BriefDescription": "The L1 cache was reloaded with a line that fulfills a prefetch request."66 },77 {88- "EventCode": "10132",88+ "EventCode": "0x10132",99 "EventName": "PM_MRK_INST_ISSUED",1010 "BriefDescription": "Marked instruction issued. Note that stores always get issued twice, the address gets issued to the LSU and the data gets issued to the VSU. Also, issues can sometimes get killed/cancelled and cause multiple sequential issues for the same instruction."1111 },1212 {1313- "EventCode": "101E0",1313+ "EventCode": "0x101E0",1414 "EventName": "PM_MRK_INST_DISP",1515 "BriefDescription": "The thread has dispatched a randomly sampled marked instruction."1616 },1717 {1818- "EventCode": "101E2",1818+ "EventCode": "0x101E2",1919 "EventName": "PM_MRK_BR_TAKEN_CMPL",2020 "BriefDescription": "Marked Branch Taken instruction completed."2121 },2222 {2323- "EventCode": "20112",2323+ "EventCode": "0x20112",2424 "EventName": "PM_MRK_NTF_FIN",2525 "BriefDescription": "The marked instruction became the oldest in the pipeline before it finished. It excludes instructions that finish at dispatch."2626 },2727 {2828- "EventCode": "2C01C",2828+ "EventCode": "0x2C01C",2929 "EventName": "PM_EXEC_STALL_DMISS_OFF_CHIP",3030 "BriefDescription": "Cycles in which the oldest instruction in the pipeline was waiting for a load miss to resolve from a remote chip."3131 },3232 {3333- "EventCode": "20138",3333+ "EventCode": "0x20138",3434 "EventName": "PM_MRK_ST_NEST",3535 "BriefDescription": "A store has been sampled/marked and is at the point of execution where it has completed in the core and can no longer be flushed. 
At this point the store is sent to the L2."3636 },3737 {3838- "EventCode": "2013A",3838+ "EventCode": "0x2013A",3939 "EventName": "PM_MRK_BRU_FIN",4040 "BriefDescription": "Marked Branch instruction finished."4141 },4242 {4343- "EventCode": "2C144",4343+ "EventCode": "0x2C144",4444 "EventName": "PM_MRK_XFER_FROM_SRC_CYC_PMC2",4545 "BriefDescription": "Cycles taken for a marked demand miss to reload a line from the source specified in MMCR3[15:27]."4646 },4747 {4848- "EventCode": "24156",4848+ "EventCode": "0x24156",4949 "EventName": "PM_MRK_STCX_FIN",5050 "BriefDescription": "Marked conditional store instruction (STCX) finished. LARX and STCX are instructions used to acquire a lock."5151 },5252 {5353- "EventCode": "24158",5353+ "EventCode": "0x24158",5454 "EventName": "PM_MRK_INST",5555 "BriefDescription": "An instruction was marked. Includes both Random Instruction Sampling (RIS) at decode time and Random Event Sampling (RES) at the time the configured event happens."5656 },5757 {5858- "EventCode": "2415C",5858+ "EventCode": "0x2415C",5959 "EventName": "PM_MRK_BR_CMPL",6060 "BriefDescription": "A marked branch completed. All branches are included."6161 },6262 {6363- "EventCode": "200FD",6363+ "EventCode": "0x200FD",6464 "EventName": "PM_L1_ICACHE_MISS",6565 "BriefDescription": "Demand iCache Miss."6666 },6767 {6868- "EventCode": "30130",6868+ "EventCode": "0x30130",6969 "EventName": "PM_MRK_INST_FIN",7070 "BriefDescription": "marked instruction finished. Excludes instructions that finish at dispatch. Note that stores always finish twice since the address gets issued to the LSU and the data gets issued to the VSU."7171 },7272 {7373- "EventCode": "34146",7373+ "EventCode": "0x34146",7474 "EventName": "PM_MRK_LD_CMPL",7575 "BriefDescription": "Marked loads completed."7676 },7777 {7878- "EventCode": "3E158",7878+ "EventCode": "0x3E158",7979 "EventName": "PM_MRK_STCX_FAIL",8080 "BriefDescription": "Marked conditional store instruction (STCX) failed. 
LARX and STCX are instructions used to acquire a lock."8181 },8282 {8383- "EventCode": "3E15A",8383+ "EventCode": "0x3E15A",8484 "EventName": "PM_MRK_ST_FIN",8585 "BriefDescription": "The marked instruction was a store of any kind."8686 },8787 {8888- "EventCode": "30068",8888+ "EventCode": "0x30068",8989 "EventName": "PM_L1_ICACHE_RELOADED_PREF",9090 "BriefDescription": "Counts all Icache prefetch reloads ( includes demand turned into prefetch)."9191 },9292 {9393- "EventCode": "301E4",9393+ "EventCode": "0x301E4",9494 "EventName": "PM_MRK_BR_MPRED_CMPL",9595 "BriefDescription": "Marked Branch Mispredicted. Includes direction and target."9696 },9797 {9898- "EventCode": "300F6",9898+ "EventCode": "0x300F6",9999 "EventName": "PM_LD_DEMAND_MISS_L1",100100 "BriefDescription": "The L1 cache was reloaded with a line that fulfills a demand miss request. Counted at reload time, before finish."101101 },102102 {103103- "EventCode": "300FE",103103+ "EventCode": "0x300FE",104104 "EventName": "PM_DATA_FROM_L3MISS",105105 "BriefDescription": "The processor's data cache was reloaded from a source other than the local core's L1, L2, or L3 due to a demand miss."106106 },107107 {108108- "EventCode": "40012",108108+ "EventCode": "0x40012",109109 "EventName": "PM_L1_ICACHE_RELOADED_ALL",110110 "BriefDescription": "Counts all Icache reloads includes demand, prefetch, prefetch turned into demand and demand turned into prefetch."111111 },112112 {113113- "EventCode": "40134",113113+ "EventCode": "0x40134",114114 "EventName": "PM_MRK_INST_TIMEO",115115 "BriefDescription": "Marked instruction finish timeout (instruction was lost)."116116 },117117 {118118- "EventCode": "4003C",119119- "EventName": "PM_DISP_STALL_HELD_SYNC_CYC",120120- "BriefDescription": "Cycles in which the NTC instruction is held at dispatch because of a synchronizing instruction that requires the ICT to be empty before dispatch."121121- },122122- {123123- "EventCode": "4505A",118118+ "EventCode": "0x4505A",124119 
"EventName": "PM_SP_FLOP_CMPL",125120 "BriefDescription": "Single Precision floating point instructions completed."126121 },127122 {128128- "EventCode": "4D058",123123+ "EventCode": "0x4D058",129124 "EventName": "PM_VECTOR_FLOP_CMPL",130125 "BriefDescription": "Vector floating point instructions completed."131126 },132127 {133133- "EventCode": "4D05A",128128+ "EventCode": "0x4D05A",134129 "EventName": "PM_NON_MATH_FLOP_CMPL",135130 "BriefDescription": "Non Math instructions completed."136131 },137132 {138138- "EventCode": "401E0",133133+ "EventCode": "0x401E0",139134 "EventName": "PM_MRK_INST_CMPL",140135 "BriefDescription": "marked instruction completed."141136 },142137 {143143- "EventCode": "400FE",138138+ "EventCode": "0x400FE",144139 "EventName": "PM_DATA_FROM_MEMORY",145140 "BriefDescription": "The processor's data cache was reloaded from local, remote, or distant memory due to a demand miss."146141 }
···11[22 {33- "EventCode": "1000A",33+ "EventCode": "0x1000A",44 "EventName": "PM_PMC3_REWIND",55 "BriefDescription": "The speculative event selected for PMC3 rewinds and the counter for PMC3 is not charged."66 },77 {88- "EventCode": "1C040",88+ "EventCode": "0x1C040",99 "EventName": "PM_XFER_FROM_SRC_PMC1",1010 "BriefDescription": "The processor's L1 data cache was reloaded from the source specified in MMCR3[0:12]. If MMCR1[16|17] is 0 (default), this count includes only lines that were reloaded to satisfy a demand miss. If MMCR1[16|17] is 1, this count includes both demand misses and prefetch reloads."1111 },1212 {1313- "EventCode": "1C142",1313+ "EventCode": "0x1C142",1414 "EventName": "PM_MRK_XFER_FROM_SRC_PMC1",1515 "BriefDescription": "For a marked data transfer instruction, the processor's L1 data cache was reloaded from the source specified in MMCR3[0:12]. If MMCR1[16|17] is 0 (default), this count includes only lines that were reloaded to satisfy a demand miss. If MMCR1[16|17] is 1, this count includes both demand misses and prefetch reloads."1616 },1717 {1818- "EventCode": "1C144",1818+ "EventCode": "0x1C144",1919 "EventName": "PM_MRK_XFER_FROM_SRC_CYC_PMC1",2020 "BriefDescription": "Cycles taken for a marked demand miss to reload a line from the source specified in MMCR3[0:12]."2121 },2222 {2323- "EventCode": "1C056",2323+ "EventCode": "0x1C056",2424 "EventName": "PM_DERAT_MISS_4K",2525 "BriefDescription": "Data ERAT Miss (Data TLB Access) page size 4K. When MMCR1[16]=0 this event counts only DERAT reloads for demand misses. When MMCR1[16]=1 this event includes demand misses and prefetches."2626 },2727 {2828- "EventCode": "1C058",2828+ "EventCode": "0x1C058",2929 "EventName": "PM_DTLB_MISS_16G",3030 "BriefDescription": "Data TLB reload (after a miss) page size 16G. When MMCR1[16]=0 this event counts only for demand misses. 
When MMCR1[16]=1 this event includes demand misses and prefetches."3131 },3232 {3333- "EventCode": "1C05C",3333+ "EventCode": "0x1C05C",3434 "EventName": "PM_DTLB_MISS_2M",3535 "BriefDescription": "Data TLB reload (after a miss) page size 2M. Implies radix translation was used. When MMCR1[16]=0 this event counts only for demand misses. When MMCR1[16]=1 this event includes demand misses and prefetches."3636 },3737 {3838- "EventCode": "1E056",3838+ "EventCode": "0x1E056",3939 "EventName": "PM_EXEC_STALL_STORE_PIPE",4040 "BriefDescription": "Cycles in which the oldest instruction in the pipeline was executing in the store unit. This does not include cycles spent handling store misses, PTESYNC instructions or TLBIE instructions."4141 },4242 {4343- "EventCode": "1F150",4343+ "EventCode": "0x1F150",4444 "EventName": "PM_MRK_ST_L2_CYC",4545 "BriefDescription": "Cycles from L2 RC dispatch to L2 RC completion."4646 },4747 {4848- "EventCode": "10062",4848+ "EventCode": "0x10062",4949 "EventName": "PM_LD_L3MISS_PEND_CYC",5050 "BriefDescription": "Cycles L3 miss was pending for this thread."5151 },5252 {5353- "EventCode": "20010",5353+ "EventCode": "0x20010",5454 "EventName": "PM_PMC1_OVERFLOW",5555 "BriefDescription": "The event selected for PMC1 caused the event counter to overflow."5656 },5757 {5858- "EventCode": "2001A",5858+ "EventCode": "0x2001A",5959 "EventName": "PM_ITLB_HIT",6060 "BriefDescription": "The PTE required to translate the instruction address was resident in the TLB (instruction TLB access/IERAT reload). Applies to both HPT and RPT. When MMCR1[17]=0 this event counts only for demand misses. When MMCR1[17]=1 this event includes demand misses and prefetches."6161 },6262 {6363- "EventCode": "2003E",6363+ "EventCode": "0x2003E",6464 "EventName": "PM_PTESYNC_FIN",6565 "BriefDescription": "Ptesync instruction finished in the store unit. 
Only one ptesync can finish at a time."6666 },6767 {6868- "EventCode": "2C040",6868+ "EventCode": "0x2C040",6969 "EventName": "PM_XFER_FROM_SRC_PMC2",7070 "BriefDescription": "The processor's L1 data cache was reloaded from the source specified in MMCR3[15:27]. If MMCR1[16|17] is 0 (default), this count includes only lines that were reloaded to satisfy a demand miss. If MMCR1[16|17] is 1, this count includes both demand misses and prefetch reloads."7171 },7272 {7373- "EventCode": "2C054",7373+ "EventCode": "0x2C054",7474 "EventName": "PM_DERAT_MISS_64K",7575 "BriefDescription": "Data ERAT Miss (Data TLB Access) page size 64K. When MMCR1[16]=0 this event counts only DERAT reloads for demand misses. When MMCR1[16]=1 this event includes demand misses and prefetches."7676 },7777 {7878- "EventCode": "2C056",7878+ "EventCode": "0x2C056",7979 "EventName": "PM_DTLB_MISS_4K",8080 "BriefDescription": "Data TLB reload (after a miss) page size 4K. When MMCR1[16]=0 this event counts only for demand misses. When MMCR1[16]=1 this event includes demand misses and prefetches."8181 },8282 {8383- "EventCode": "2D154",8383+ "EventCode": "0x2D154",8484 "EventName": "PM_MRK_DERAT_MISS_64K",8585 "BriefDescription": "Data ERAT Miss (Data TLB Access) page size 64K for a marked instruction. When MMCR1[16]=0 this event counts only DERAT reloads for demand misses. When MMCR1[16]=1 this event includes demand misses and prefetches."8686 },8787 {8888- "EventCode": "200F6",8888+ "EventCode": "0x200F6",8989 "EventName": "PM_DERAT_MISS",9090 "BriefDescription": "DERAT Reloaded to satisfy a DERAT miss. All page sizes are counted by this event. When MMCR1[16]=0 this event counts only DERAT reloads for demand misses. 
When MMCR1[16]=1 this event includes demand misses and prefetches."9191 },9292 {9393- "EventCode": "3000A",9494- "EventName": "PM_DISP_STALL_ITLB_MISS",9595- "BriefDescription": "Cycles when dispatch was stalled while waiting to resolve an instruction TLB miss."9696- },9797- {9898- "EventCode": "30016",9393+ "EventCode": "0x30016",9994 "EventName": "PM_EXEC_STALL_DERAT_DTLB_MISS",10095 "BriefDescription": "Cycles in which the oldest instruction in the pipeline suffered a TLB miss and waited for it to resolve."10196 },10297 {103103- "EventCode": "3C040",9898+ "EventCode": "0x3C040",10499 "EventName": "PM_XFER_FROM_SRC_PMC3",105100 "BriefDescription": "The processor's L1 data cache was reloaded from the source specified in MMCR3[30:42]. If MMCR1[16|17] is 0 (default), this count includes only lines that were reloaded to satisfy a demand miss. If MMCR1[16|17] is 1, this count includes both demand misses and prefetch reloads."106101 },107102 {108108- "EventCode": "3C142",103103+ "EventCode": "0x3C142",109104 "EventName": "PM_MRK_XFER_FROM_SRC_PMC3",110105 "BriefDescription": "For a marked data transfer instruction, the processor's L1 data cache was reloaded from the source specified in MMCR3[30:42]. If MMCR1[16|17] is 0 (default), this count includes only lines that were reloaded to satisfy a demand miss. If MMCR1[16|17] is 1, this count includes both demand misses and prefetch reloads."111106 },112107 {113113- "EventCode": "3C144",108108+ "EventCode": "0x3C144",114109 "EventName": "PM_MRK_XFER_FROM_SRC_CYC_PMC3",115110 "BriefDescription": "Cycles taken for a marked demand miss to reload a line from the source specified in MMCR3[30:42]."116111 },117112 {118118- "EventCode": "3C054",113113+ "EventCode": "0x3C054",119114 "EventName": "PM_DERAT_MISS_16M",120115 "BriefDescription": "Data ERAT Miss (Data TLB Access) page size 16M. When MMCR1[16]=0 this event counts only DERAT reloads for demand misses. 
When MMCR1[16]=1 this event includes demand misses and prefetches."121116 },122117 {123123- "EventCode": "3C056",118118+ "EventCode": "0x3C056",124119 "EventName": "PM_DTLB_MISS_64K",125120 "BriefDescription": "Data TLB reload (after a miss) page size 64K. When MMCR1[16]=0 this event counts only for demand misses. When MMCR1[16]=1 this event includes demand misses and prefetches."126121 },127122 {128128- "EventCode": "3C058",123123+ "EventCode": "0x3C058",129124 "EventName": "PM_LARX_FIN",130125 "BriefDescription": "Load and reserve instruction (LARX) finished. LARX and STCX are instructions used to acquire a lock."131126 },132127 {133133- "EventCode": "301E2",128128+ "EventCode": "0x301E2",134129 "EventName": "PM_MRK_ST_CMPL",135130 "BriefDescription": "Marked store completed and sent to nest. Note that this count excludes cache-inhibited stores."136131 },137132 {138138- "EventCode": "300FC",133133+ "EventCode": "0x300FC",139134 "EventName": "PM_DTLB_MISS",140135 "BriefDescription": "The DPTEG required for the load/store instruction in execution was missing from the TLB. It includes pages of all sizes for demand and prefetch activity."141136 },142137 {143143- "EventCode": "4D02C",138138+ "EventCode": "0x4D02C",144139 "EventName": "PM_PMC1_REWIND",145140 "BriefDescription": "The speculative event selected for PMC1 rewinds and the counter for PMC1 is not charged."146141 },147142 {148148- "EventCode": "4003E",143143+ "EventCode": "0x4003E",149144 "EventName": "PM_LD_CMPL",150145 "BriefDescription": "Loads completed."151146 },152147 {153153- "EventCode": "4C040",148148+ "EventCode": "0x4C040",154149 "EventName": "PM_XFER_FROM_SRC_PMC4",155150 "BriefDescription": "The processor's L1 data cache was reloaded from the source specified in MMCR3[45:57]. If MMCR1[16|17] is 0 (default), this count includes only lines that were reloaded to satisfy a demand miss. 
If MMCR1[16|17] is 1, this count includes both demand misses and prefetch reloads."156151 },157152 {158158- "EventCode": "4C142",153153+ "EventCode": "0x4C142",159154 "EventName": "PM_MRK_XFER_FROM_SRC_PMC4",160155 "BriefDescription": "For a marked data transfer instruction, the processor's L1 data cache was reloaded from the source specified in MMCR3[45:57]. If MMCR1[16|17] is 0 (default), this count includes only lines that were reloaded to satisfy a demand miss. If MMCR1[16|17] is 1, this count includes both demand misses and prefetch reloads."161156 },162157 {163163- "EventCode": "4C144",158158+ "EventCode": "0x4C144",164159 "EventName": "PM_MRK_XFER_FROM_SRC_CYC_PMC4",165160 "BriefDescription": "Cycles taken for a marked demand miss to reload a line from the source specified in MMCR3[45:57]."166161 },167162 {168168- "EventCode": "4C056",163163+ "EventCode": "0x4C056",169164 "EventName": "PM_DTLB_MISS_16M",170165 "BriefDescription": "Data TLB reload (after a miss) page size 16M. When MMCR1[16]=0 this event counts only for demand misses. When MMCR1[16]=1 this event includes demand misses and prefetches."171166 },172167 {173173- "EventCode": "4C05A",168168+ "EventCode": "0x4C05A",174169 "EventName": "PM_DTLB_MISS_1G",175170 "BriefDescription": "Data TLB reload (after a miss) page size 1G. Implies radix translation was used. When MMCR1[16]=0 this event counts only for demand misses. When MMCR1[16]=1 this event includes demand misses and prefetches."176171 },177172 {178178- "EventCode": "4C15E",173173+ "EventCode": "0x4C15E",179174 "EventName": "PM_MRK_DTLB_MISS_64K",180175 "BriefDescription": "Marked Data TLB reload (after a miss) page size 64K. When MMCR1[16]=0 this event counts only for demand misses. 
When MMCR1[16]=1 this event includes demand misses and prefetches."181176 },182177 {183183- "EventCode": "4D056",178178+ "EventCode": "0x4D056",184179 "EventName": "PM_NON_FMA_FLOP_CMPL",185180 "BriefDescription": "Non FMA instruction completed."186181 },187182 {188188- "EventCode": "40164",183183+ "EventCode": "0x40164",189184 "EventName": "PM_MRK_DERAT_MISS_2M",190185 "BriefDescription": "Data ERAT Miss (Data TLB Access) page size 2M for a marked instruction. When MMCR1[16]=0 this event counts only DERAT reloads for demand misses. When MMCR1[16]=1 this event includes demand misses and prefetches."191186 }
···11[22 {33- "EventCode": "10016",33+ "EventCode": "0x10016",44 "EventName": "PM_VSU0_ISSUE",55 "BriefDescription": "VSU instructions issued to VSU pipe 0."66 },77 {88- "EventCode": "1001C",88+ "EventCode": "0x1001C",99 "EventName": "PM_ULTRAVISOR_INST_CMPL",1010 "BriefDescription": "PowerPC instructions that completed while the thread was in ultravisor state."1111 },1212 {1313- "EventCode": "100F0",1313+ "EventCode": "0x100F0",1414 "EventName": "PM_CYC",1515 "BriefDescription": "Processor cycles."1616 },1717 {1818- "EventCode": "10134",1818+ "EventCode": "0x10134",1919 "EventName": "PM_MRK_ST_DONE_L2",2020 "BriefDescription": "Marked stores completed in L2 (RC machine done)."2121 },2222 {2323- "EventCode": "1505E",2323+ "EventCode": "0x1505E",2424 "EventName": "PM_LD_HIT_L1",2525 "BriefDescription": "Loads that finished without experiencing an L1 miss."2626 },2727 {2828- "EventCode": "1D05E",2929- "EventName": "PM_DISP_STALL_HELD_HALT_CYC",3030- "BriefDescription": "Cycles in which the NTC instruction is held at dispatch because of power management."3131- },3232- {3333- "EventCode": "1E054",3434- "EventName": "PM_EXEC_STALL_DMISS_L21_L31",3535- "BriefDescription": "Cycles in which the oldest instruction in the pipeline was waiting for a load miss to resolve from another core's L2 or L3 on the same chip."3636- },3737- {3838- "EventCode": "1E05A",3939- "EventName": "PM_CMPL_STALL_LWSYNC",4040- "BriefDescription": "Cycles in which the oldest instruction in the pipeline was a lwsync waiting to complete."4141- },4242- {4343- "EventCode": "1F056",2828+ "EventCode": "0x1F056",4429 "EventName": "PM_DISP_SS0_2_INSTR_CYC",4530 "BriefDescription": "Cycles in which Superslice 0 dispatches either 1 or 2 instructions."4631 },4732 {4848- "EventCode": "1F15C",3333+ "EventCode": "0x1F15C",4934 "EventName": "PM_MRK_STCX_L2_CYC",5035 "BriefDescription": "Cycles spent in the nest portion of a marked Stcx instruction. 
It starts counting when the operation starts to drain to the L2 and it stops counting when the instruction retires from the Instruction Completion Table (ICT) in the Instruction Sequencing Unit (ISU)."5136 },5237 {5353- "EventCode": "10066",3838+ "EventCode": "0x10066",5439 "EventName": "PM_ADJUNCT_CYC",5540 "BriefDescription": "Cycles in which the thread is in Adjunct state. MSR[S HV PR] bits = 011."5641 },5742 {5858- "EventCode": "101E4",4343+ "EventCode": "0x101E4",5944 "EventName": "PM_MRK_L1_ICACHE_MISS",6045 "BriefDescription": "Marked Instruction suffered an icache Miss."6146 },6247 {6363- "EventCode": "101EA",4848+ "EventCode": "0x101EA",6449 "EventName": "PM_MRK_L1_RELOAD_VALID",6550 "BriefDescription": "Marked demand reload."6651 },6752 {6868- "EventCode": "100F4",5353+ "EventCode": "0x100F4",6954 "EventName": "PM_FLOP_CMPL",7055 "BriefDescription": "Floating Point Operations Completed. Includes any type. It counts once for each 1, 2, 4 or 8 flop instruction. Use PM_1|2|4|8_FLOP_CMPL events to count flops."7156 },7257 {7373- "EventCode": "100FA",5858+ "EventCode": "0x100FA",7459 "EventName": "PM_RUN_LATCH_ANY_THREAD_CYC",7560 "BriefDescription": "Cycles when at least one thread has the run latch set."7661 },7762 {7878- "EventCode": "100FC",6363+ "EventCode": "0x100FC",7964 "EventName": "PM_LD_REF_L1",8065 "BriefDescription": "All L1 D cache load references counted at finish, gated by reject. In P9 and earlier this event counted only cacheable loads but in P10 both cacheable and non-cacheable loads are included."8166 },8267 {8383- "EventCode": "20006",8484- "EventName": "PM_DISP_STALL_HELD_ISSQ_FULL_CYC",8585- "BriefDescription": "Cycles in which the NTC instruction is held at dispatch due to Issue queue full. 
Includes issue queue and branch queue."8686- },8787- {8888- "EventCode": "2000C",6868+ "EventCode": "0x2000C",8969 "EventName": "PM_RUN_LATCH_ALL_THREADS_CYC",9070 "BriefDescription": "Cycles when the run latch is set for all threads."9171 },9272 {9393- "EventCode": "2E010",7373+ "EventCode": "0x2E010",9474 "EventName": "PM_ADJUNCT_INST_CMPL",9575 "BriefDescription": "PowerPC instructions that completed while the thread is in Adjunct state."9676 },9777 {9898- "EventCode": "2E014",7878+ "EventCode": "0x2E014",9979 "EventName": "PM_STCX_FIN",10080 "BriefDescription": "Conditional store instruction (STCX) finished. LARX and STCX are instructions used to acquire a lock."10181 },10282 {103103- "EventCode": "20130",8383+ "EventCode": "0x20130",10484 "EventName": "PM_MRK_INST_DECODED",10585 "BriefDescription": "An instruction was marked at decode time. Random Instruction Sampling (RIS) only."10686 },10787 {108108- "EventCode": "20132",8888+ "EventCode": "0x20132",10989 "EventName": "PM_MRK_DFU_ISSUE",11090 "BriefDescription": "The marked instruction was a decimal floating point operation issued to the VSU. Measured at issue time."11191 },11292 {113113- "EventCode": "20134",9393+ "EventCode": "0x20134",11494 "EventName": "PM_MRK_FXU_ISSUE",11595 "BriefDescription": "The marked instruction was a fixed point operation issued to the VSU. Measured at issue time."11696 },11797 {118118- "EventCode": "2505C",9898+ "EventCode": "0x2505C",11999 "EventName": "PM_VSU_ISSUE",120100 "BriefDescription": "At least one VSU instruction was issued to one of the VSU pipes. Up to 4 per cycle. 
Includes fixed point operations."121101 },122102 {123123- "EventCode": "2F054",103103+ "EventCode": "0x2F054",124104 "EventName": "PM_DISP_SS1_2_INSTR_CYC",125105 "BriefDescription": "Cycles in which Superslice 1 dispatches either 1 or 2 instructions."126106 },127107 {128128- "EventCode": "2F056",108108+ "EventCode": "0x2F056",129109 "EventName": "PM_DISP_SS1_4_INSTR_CYC",130110 "BriefDescription": "Cycles in which Superslice 1 dispatches either 3 or 4 instructions."131111 },132112 {133133- "EventCode": "2006C",113113+ "EventCode": "0x2006C",134114 "EventName": "PM_RUN_CYC_SMT4_MODE",135115 "BriefDescription": "Cycles when this thread's run latch is set and the core is in SMT4 mode."136116 },137117 {138138- "EventCode": "201E0",118118+ "EventCode": "0x201E0",139119 "EventName": "PM_MRK_DATA_FROM_MEMORY",140120 "BriefDescription": "The processor's data cache was reloaded from local, remote, or distant memory due to a demand miss for a marked load."141121 },142122 {143143- "EventCode": "201E4",123123+ "EventCode": "0x201E4",144124 "EventName": "PM_MRK_DATA_FROM_L3MISS",145125 "BriefDescription": "The processor's data cache was reloaded from a source other than the local core's L1, L2, or L3 due to a demand miss for a marked load."146126 },147127 {148148- "EventCode": "201E8",128128+ "EventCode": "0x201E8",149129 "EventName": "PM_THRESH_EXC_512",150130 "BriefDescription": "Threshold counter exceeded a value of 512."151131 },152132 {153153- "EventCode": "200F2",133133+ "EventCode": "0x200F2",154134 "EventName": "PM_INST_DISP",155135 "BriefDescription": "PowerPC instructions dispatched."156136 },157137 {158158- "EventCode": "30132",138138+ "EventCode": "0x30132",159139 "EventName": "PM_MRK_VSU_FIN",160140 "BriefDescription": "VSU marked instructions finished. 
Excludes simple FX instructions issued to the Store Unit."161141 },162142 {163163- "EventCode": "30038",143143+ "EventCode": "0x30038",164144 "EventName": "PM_EXEC_STALL_DMISS_LMEM",165145 "BriefDescription": "Cycles in which the oldest instruction in the pipeline was waiting for a load miss to resolve from the local memory, local OpenCAPI cache, or local OpenCAPI memory."166146 },167147 {168168- "EventCode": "3F04A",148148+ "EventCode": "0x3F04A",169149 "EventName": "PM_LSU_ST5_FIN",170150 "BriefDescription": "LSU Finished an internal operation in ST2 port."171151 },172152 {173173- "EventCode": "34054",174174- "EventName": "PM_EXEC_STALL_DMISS_L2L3_NOCONFLICT",175175- "BriefDescription": "Cycles in which the oldest instruction in the pipeline was waiting for a load miss to resolve from the local L2 or local L3, without a dispatch conflict."176176- },177177- {178178- "EventCode": "3405A",153153+ "EventCode": "0x3405A",179154 "EventName": "PM_PRIVILEGED_INST_CMPL",180155 "BriefDescription": "PowerPC Instructions that completed while the thread is in Privileged state."181156 },182157 {183183- "EventCode": "3F150",158158+ "EventCode": "0x3F150",184159 "EventName": "PM_MRK_ST_DRAIN_CYC",185160 "BriefDescription": "Cycles to drain st from core to L2."186161 },187162 {188188- "EventCode": "3F054",163163+ "EventCode": "0x3F054",189164 "EventName": "PM_DISP_SS0_4_INSTR_CYC",190165 "BriefDescription": "Cycles in which Superslice 0 dispatches either 3 or 4 instructions."191166 },192167 {193193- "EventCode": "3F056",168168+ "EventCode": "0x3F056",194169 "EventName": "PM_DISP_SS0_8_INSTR_CYC",195170 "BriefDescription": "Cycles in which Superslice 0 dispatches either 5, 6, 7 or 8 instructions."196171 },197172 {198198- "EventCode": "30162",173173+ "EventCode": "0x30162",199174 "EventName": "PM_MRK_ISSUE_DEPENDENT_LOAD",200175 "BriefDescription": "The marked instruction was dependent on a load. 
It is eligible for issue kill."201176 },202177 {203203- "EventCode": "40114",178178+ "EventCode": "0x40114",204179 "EventName": "PM_MRK_START_PROBE_NOP_DISP",205180 "BriefDescription": "Marked Start probe nop dispatched. Instruction AND R0,R0,R0."206181 },207182 {208208- "EventCode": "4001C",183183+ "EventCode": "0x4001C",209184 "EventName": "PM_VSU_FIN",210185 "BriefDescription": "VSU instructions finished."211186 },212187 {213213- "EventCode": "4C01A",188188+ "EventCode": "0x4C01A",214189 "EventName": "PM_EXEC_STALL_DMISS_OFF_NODE",215190 "BriefDescription": "Cycles in which the oldest instruction in the pipeline was waiting for a load miss to resolve from a distant chip."216191 },217192 {218218- "EventCode": "4D012",193193+ "EventCode": "0x4D012",219194 "EventName": "PM_PMC3_SAVED",220195 "BriefDescription": "The conditions for the speculative event selected for PMC3 are met and PMC3 is charged."221196 },222197 {223223- "EventCode": "4D022",198198+ "EventCode": "0x4D022",224199 "EventName": "PM_HYPERVISOR_INST_CMPL",225200 "BriefDescription": "PowerPC instructions that completed while the thread is in hypervisor state."226201 },227202 {228228- "EventCode": "4D026",203203+ "EventCode": "0x4D026",229204 "EventName": "PM_ULTRAVISOR_CYC",230205 "BriefDescription": "Cycles when the thread is in Ultravisor state. MSR[S HV PR]=110."231206 },232207 {233233- "EventCode": "4D028",208208+ "EventCode": "0x4D028",234209 "EventName": "PM_PRIVILEGED_CYC",235210 "BriefDescription": "Cycles when the thread is in Privileged state. MSR[S HV PR]=x00."236211 },237212 {238238- "EventCode": "40030",213213+ "EventCode": "0x40030",239214 "EventName": "PM_INST_FIN",240215 "BriefDescription": "Instructions finished."241216 },242217 {243243- "EventCode": "44146",218218+ "EventCode": "0x44146",244219 "EventName": "PM_MRK_STCX_CORE_CYC",245220 "BriefDescription": "Cycles spent in the core portion of a marked Stcx instruction. 
It starts counting when the instruction is decoded and stops counting when it drains into the L2."246221 },247222 {248248- "EventCode": "44054",223223+ "EventCode": "0x44054",249224 "EventName": "PM_VECTOR_LD_CMPL",250225 "BriefDescription": "Vector load instructions completed."251226 },252227 {253253- "EventCode": "45054",228228+ "EventCode": "0x45054",254229 "EventName": "PM_FMA_CMPL",255230 "BriefDescription": "Two floating point instructions completed (FMA class of instructions: fmadd, fnmadd, fmsub, fnmsub). Scalar instructions only."256231 },257232 {258258- "EventCode": "45056",233233+ "EventCode": "0x45056",259234 "EventName": "PM_SCALAR_FLOP_CMPL",260235 "BriefDescription": "Scalar floating point instructions completed."261236 },262237 {263263- "EventCode": "4505C",238238+ "EventCode": "0x4505C",264239 "EventName": "PM_MATH_FLOP_CMPL",265240 "BriefDescription": "Math floating point instructions completed."266241 },267242 {268268- "EventCode": "4D05E",243243+ "EventCode": "0x4D05E",269244 "EventName": "PM_BR_CMPL",270245 "BriefDescription": "A branch completed. 
All branches are included."271246 },272247 {273273- "EventCode": "4E15E",248248+ "EventCode": "0x4E15E",274249 "EventName": "PM_MRK_INST_FLUSHED",275250 "BriefDescription": "The marked instruction was flushed."276251 },277252 {278278- "EventCode": "401E6",253253+ "EventCode": "0x401E6",279254 "EventName": "PM_MRK_INST_FROM_L3MISS",280255 "BriefDescription": "The processor's instruction cache was reloaded from a source other than the local core's L1, L2, or L3 due to a demand miss for a marked instruction."281256 },282257 {283283- "EventCode": "401E8",258258+ "EventCode": "0x401E8",284259 "EventName": "PM_MRK_DATA_FROM_L2MISS",285260 "BriefDescription": "The processor's data cache was reloaded from a source other than the local core's L1 or L2 due to a demand miss for a marked load."286261 },287262 {288288- "EventCode": "400F0",263263+ "EventCode": "0x400F0",289264 "EventName": "PM_LD_DEMAND_MISS_L1_FIN",290265 "BriefDescription": "Load Missed L1, counted at finish time."291266 },292267 {293293- "EventCode": "400FA",268268+ "EventCode": "0x400FA",294269 "EventName": "PM_RUN_INST_CMPL",295270 "BriefDescription": "Completed PowerPC instructions gated by the run latch."296271 }
···11[22 {33- "EventCode": "100FE",33+ "EventCode": "0x100FE",44 "EventName": "PM_INST_CMPL",55 "BriefDescription": "PowerPC instructions completed."66 },77 {88- "EventCode": "10006",99- "EventName": "PM_DISP_STALL_HELD_OTHER_CYC",1010- "BriefDescription": "Cycles in which the NTC instruction is held at dispatch for any other reason."1111- },1212- {1313- "EventCode": "1000C",88+ "EventCode": "0x1000C",149 "EventName": "PM_LSU_LD0_FIN",1510 "BriefDescription": "LSU Finished an internal operation in LD0 port."1611 },1712 {1818- "EventCode": "1000E",1313+ "EventCode": "0x1000E",1914 "EventName": "PM_MMA_ISSUED",2015 "BriefDescription": "MMA instructions issued."2116 },2217 {2323- "EventCode": "10012",1818+ "EventCode": "0x10012",2419 "EventName": "PM_LSU_ST0_FIN",2520 "BriefDescription": "LSU Finished an internal operation in ST0 port."2621 },2722 {2828- "EventCode": "10014",2323+ "EventCode": "0x10014",2924 "EventName": "PM_LSU_ST4_FIN",3025 "BriefDescription": "LSU Finished an internal operation in ST4 port."3126 },3227 {3333- "EventCode": "10018",2828+ "EventCode": "0x10018",3429 "EventName": "PM_IC_DEMAND_CYC",3530 "BriefDescription": "Cycles in which an instruction reload is pending to satisfy a demand miss."3631 },3732 {3838- "EventCode": "10022",3333+ "EventCode": "0x10022",3934 "EventName": "PM_PMC2_SAVED",4035 "BriefDescription": "The conditions for the speculative event selected for PMC2 are met and PMC2 is charged."4136 },4237 {4343- "EventCode": "10024",3838+ "EventCode": "0x10024",4439 "EventName": "PM_PMC5_OVERFLOW",4540 "BriefDescription": "The event selected for PMC5 caused the event counter to overflow."4641 },4742 {4848- "EventCode": "10058",4343+ "EventCode": "0x10058",4944 "EventName": "PM_EXEC_STALL_FIN_AT_DISP",5045 "BriefDescription": "Cycles in which the oldest instruction in the pipeline finished at dispatch and did not require execution in the LSU, BRU or VSU."5146 },5247 {5353- "EventCode": "1005A",4848+ "EventCode": "0x1005A",5449 
"EventName": "PM_FLUSH_MPRED",5550 "BriefDescription": "A flush occurred due to a mispredicted branch. Includes target and direction."5651 },5752 {5858- "EventCode": "1C05A",5353+ "EventCode": "0x1C05A",5954 "EventName": "PM_DERAT_MISS_2M",6055 "BriefDescription": "Data ERAT Miss (Data TLB Access) page size 2M. Implies radix translation. When MMCR1[16]=0 this event counts only DERAT reloads for demand misses. When MMCR1[16]=1 this event includes demand misses and prefetches."6156 },6257 {6363- "EventCode": "10064",6464- "EventName": "PM_DISP_STALL_IC_L2",6565- "BriefDescription": "Cycles when dispatch was stalled while the instruction was fetched from the local L2."5858+ "EventCode": "0x1E05A",5959+ "EventName": "PM_CMPL_STALL_LWSYNC",6060+ "BriefDescription": "Cycles in which the oldest instruction in the pipeline was a lwsync waiting to complete."6661 },6762 {6868- "EventCode": "10068",6363+ "EventCode": "0x10068",6964 "EventName": "PM_BR_FIN",7065 "BriefDescription": "A branch instruction finished. Includes predicted/mispredicted/unconditional."7166 },7267 {7373- "EventCode": "1006A",6868+ "EventCode": "0x1006A",7469 "EventName": "PM_FX_LSU_FIN",7570 "BriefDescription": "Simple fixed point instruction issued to the store unit. Measured at finish time."7671 },7772 {7878- "EventCode": "1006C",7373+ "EventCode": "0x1006C",7974 "EventName": "PM_RUN_CYC_ST_MODE",8075 "BriefDescription": "Cycles when the run latch is set and the core is in ST mode."8176 },8277 {8383- "EventCode": "20004",7878+ "EventCode": "0x20004",8479 "EventName": "PM_ISSUE_STALL",8580 "BriefDescription": "Cycles in which the oldest instruction in the pipeline was dispatched but not issued yet."8681 },8782 {8888- "EventCode": "2000A",8383+ "EventCode": "0x2000A",8984 "EventName": "PM_HYPERVISOR_CYC",9085 "BriefDescription": "Cycles when the thread is in Hypervisor state. 
MSR[S HV PR]=010."9186 },9287 {9393- "EventCode": "2000E",8888+ "EventCode": "0x2000E",9489 "EventName": "PM_LSU_LD1_FIN",9590 "BriefDescription": "LSU Finished an internal operation in LD1 port."9691 },9792 {9898- "EventCode": "2C014",9393+ "EventCode": "0x2C014",9994 "EventName": "PM_CMPL_STALL_SPECIAL",10095 "BriefDescription": "Cycles in which the oldest instruction in the pipeline required special handling before completing."10196 },10297 {103103- "EventCode": "2C018",9898+ "EventCode": "0x2C018",10499 "EventName": "PM_EXEC_STALL_DMISS_L3MISS",105100 "BriefDescription": "Cycles in which the oldest instruction in the pipeline was waiting for a load miss to resolve from a source beyond the local L2 or local L3."106101 },107102 {108108- "EventCode": "2D010",103103+ "EventCode": "0x2D010",109104 "EventName": "PM_LSU_ST1_FIN",110105 "BriefDescription": "LSU Finished an internal operation in ST1 port."111106 },112107 {113113- "EventCode": "2D012",108108+ "EventCode": "0x2D012",114109 "EventName": "PM_VSU1_ISSUE",115110 "BriefDescription": "VSU instructions issued to VSU pipe 1."116111 },117112 {118118- "EventCode": "2D018",113113+ "EventCode": "0x2D018",119114 "EventName": "PM_EXEC_STALL_VSU",120115 "BriefDescription": "Cycles in which the oldest instruction in the pipeline was executing in the VSU (includes FXU, VSU, CRU)."121116 },122117 {123123- "EventCode": "2E01E",124124- "EventName": "PM_EXEC_STALL_NTC_FLUSH",125125- "BriefDescription": "Cycles in which the oldest instruction in the pipeline was executing in any unit before it was flushed. 
Note that if the flush of the oldest instruction happens after finish, the cycles from dispatch to issue will be included in PM_DISP_STALL and the cycles from issue to finish will be included in PM_EXEC_STALL and its corresponding children."118118+ "EventCode": "0x2D01C",119119+ "EventName": "PM_CMPL_STALL_STCX",120120+ "BriefDescription": "Cycles in which the oldest instruction in the pipeline was a stcx waiting for resolution from the nest before completing."126121 },127122 {128128- "EventCode": "2013C",123123+ "EventCode": "0x2E01E",124124+ "EventName": "PM_EXEC_STALL_NTC_FLUSH",125125+ "BriefDescription": "Cycles in which the oldest instruction in the pipeline was executing in any unit before it was flushed. Note that if the flush of the oldest instruction happens after finish, the cycles from dispatch to issue will be included in PM_DISP_STALL and the cycles from issue to finish will be included in PM_EXEC_STALL and its corresponding children. This event will also count cycles when the previous NTF instruction is still completing and the new NTF instruction is stalled at dispatch."126126+ },127127+ {128128+ "EventCode": "0x2013C",129129 "EventName": "PM_MRK_FX_LSU_FIN",130130 "BriefDescription": "The marked instruction was simple fixed point that was issued to the store unit. Measured at finish time."131131 },132132 {133133- "EventCode": "2405A",133133+ "EventCode": "0x2405A",134134 "EventName": "PM_NTC_FIN",135135 "BriefDescription": "Cycles in which the oldest instruction in the pipeline (NTC) finishes. 
Note that instructions can finish out of order, therefore not all the instructions that finish have a Next-to-complete status."136136 },137137 {138138- "EventCode": "201E2",138138+ "EventCode": "0x201E2",139139 "EventName": "PM_MRK_LD_MISS_L1",140140 "BriefDescription": "Marked DL1 Demand Miss counted at finish time."141141 },142142 {143143- "EventCode": "200F4",143143+ "EventCode": "0x200F4",144144 "EventName": "PM_RUN_CYC",145145 "BriefDescription": "Processor cycles gated by the run latch."146146 },147147 {148148- "EventCode": "30004",149149- "EventName": "PM_DISP_STALL_FLUSH",150150- "BriefDescription": "Cycles when dispatch was stalled because of a flush that happened to an instruction(s) that was not yet NTC. PM_EXEC_STALL_NTC_FLUSH only includes instructions that were flushed after becoming NTC."151151- },152152- {153153- "EventCode": "30008",148148+ "EventCode": "0x30008",154149 "EventName": "PM_EXEC_STALL",155150 "BriefDescription": "Cycles in which the oldest instruction in the pipeline was waiting to finish in one of the execution units (BRU, LSU, VSU). 
Only cycles between issue and finish are counted in this category."156151 },157152 {158158- "EventCode": "3001A",153153+ "EventCode": "0x3001A",159154 "EventName": "PM_LSU_ST2_FIN",160155 "BriefDescription": "LSU Finished an internal operation in ST2 port."161156 },162157 {163163- "EventCode": "30020",158158+ "EventCode": "0x30020",164159 "EventName": "PM_PMC2_REWIND",165160 "BriefDescription": "The speculative event selected for PMC2 rewinds and the counter for PMC2 is not charged."166161 },167162 {168168- "EventCode": "30022",163163+ "EventCode": "0x30022",169164 "EventName": "PM_PMC4_SAVED",170165 "BriefDescription": "The conditions for the speculative event selected for PMC4 are met and PMC4 is charged."171166 },172167 {173173- "EventCode": "30024",168168+ "EventCode": "0x30024",174169 "EventName": "PM_PMC6_OVERFLOW",175170 "BriefDescription": "The event selected for PMC6 caused the event counter to overflow."176171 },177172 {178178- "EventCode": "30028",173173+ "EventCode": "0x30028",179174 "EventName": "PM_CMPL_STALL_MEM_ECC",180175 "BriefDescription": "Cycles in which the oldest instruction in the pipeline was waiting for the non-speculative finish of either a stcx waiting for its result or a load waiting for non-critical sectors of data and ECC."181176 },182177 {183183- "EventCode": "30036",178178+ "EventCode": "0x30036",184179 "EventName": "PM_EXEC_STALL_SIMPLE_FX",185180 "BriefDescription": "Cycles in which the oldest instruction in the pipeline was a simple fixed point instruction executing in the Load Store Unit."186181 },187182 {188188- "EventCode": "3003A",183183+ "EventCode": "0x3003A",189184 "EventName": "PM_CMPL_STALL_EXCEPTION",190185 "BriefDescription": "Cycles in which the oldest instruction in the pipeline was not allowed to complete because it was interrupted by ANY exception, which has to be serviced before the instruction can complete."191186 },192187 {193193- "EventCode": "3F044",188188+ "EventCode": "0x3F044",194189 "EventName": 
"PM_VSU2_ISSUE",195190 "BriefDescription": "VSU instructions issued to VSU pipe 2."196191 },197192 {198198- "EventCode": "30058",193193+ "EventCode": "0x30058",199194 "EventName": "PM_TLBIE_FIN",200195 "BriefDescription": "TLBIE instructions finished in the LSU. Two TLBIEs can finish each cycle. All will be counted."201196 },202197 {203203- "EventCode": "3D058",198198+ "EventCode": "0x3D058",204199 "EventName": "PM_SCALAR_FSQRT_FDIV_ISSUE",205200 "BriefDescription": "Scalar versions of four floating point operations: fdiv,fsqrt (xvdivdp, xvdivsp, xvsqrtdp, xvsqrtsp)."206201 },207202 {208208- "EventCode": "30066",203203+ "EventCode": "0x30066",209204 "EventName": "PM_LSU_FIN",210205 "BriefDescription": "LSU Finished an internal operation (up to 4 per cycle)."211206 },212207 {213213- "EventCode": "40004",208208+ "EventCode": "0x40004",214209 "EventName": "PM_FXU_ISSUE",215210 "BriefDescription": "A fixed point instruction was issued to the VSU."216211 },217212 {218218- "EventCode": "40008",213213+ "EventCode": "0x40008",219214 "EventName": "PM_NTC_ALL_FIN",220215 "BriefDescription": "Cycles in which both instructions in the ICT entry pair show as finished. 
These are the cycles between finish and completion for the oldest pair of instructions in the pipeline."221216 },222217 {223223- "EventCode": "40010",218218+ "EventCode": "0x40010",224219 "EventName": "PM_PMC3_OVERFLOW",225220 "BriefDescription": "The event selected for PMC3 caused the event counter to overflow."226221 },227222 {228228- "EventCode": "4C012",223223+ "EventCode": "0x4C012",229224 "EventName": "PM_EXEC_STALL_DERAT_ONLY_MISS",230225 "BriefDescription": "Cycles in which the oldest instruction in the pipeline suffered an ERAT miss and waited for it to resolve."231226 },232227 {233233- "EventCode": "4C018",228228+ "EventCode": "0x4C018",234229 "EventName": "PM_CMPL_STALL",235230 "BriefDescription": "Cycles in which the oldest instruction in the pipeline cannot complete because the thread was blocked for any reason."236231 },237232 {238238- "EventCode": "4C01E",233233+ "EventCode": "0x4C01E",239234 "EventName": "PM_LSU_ST3_FIN",240235 "BriefDescription": "LSU Finished an internal operation in ST3 port."241236 },242237 {243243- "EventCode": "4D018",238238+ "EventCode": "0x4D018",244239 "EventName": "PM_EXEC_STALL_BRU",245240 "BriefDescription": "Cycles in which the oldest instruction in the pipeline was executing in the Branch unit."246241 },247242 {248248- "EventCode": "4D01A",243243+ "EventCode": "0x4D01A",249244 "EventName": "PM_CMPL_STALL_HWSYNC",250245 "BriefDescription": "Cycles in which the oldest instruction in the pipeline was a hwsync waiting for response from L2 before completing."251246 },252247 {253253- "EventCode": "4D01C",248248+ "EventCode": "0x4D01C",254249 "EventName": "PM_EXEC_STALL_TLBIEL",255250 "BriefDescription": "Cycles in which the oldest instruction in the pipeline was a TLBIEL instruction executing in the Load Store Unit. 
TLBIEL instructions have lower overhead than TLBIE instructions because they don't get sent to the nest."256251 },257252 {258258- "EventCode": "4E012",253253+ "EventCode": "0x4E012",259254 "EventName": "PM_EXEC_STALL_UNKNOWN",260255 "BriefDescription": "Cycles in which the oldest instruction in the pipeline completed without an ntf_type pulse. The ntf_pulse was missed by the ISU because the NTF finishes and completions came too close together."261256 },262257 {263263- "EventCode": "4D020",258258+ "EventCode": "0x4D020",264259 "EventName": "PM_VSU3_ISSUE",265260 "BriefDescription": "VSU instruction was issued to VSU pipe 3."266261 },267262 {268268- "EventCode": "40132",263263+ "EventCode": "0x40132",269264 "EventName": "PM_MRK_LSU_FIN",270265 "BriefDescription": "LSU marked instruction finish."271266 },272267 {273273- "EventCode": "45058",268268+ "EventCode": "0x45058",274269 "EventName": "PM_IC_MISS_CMPL",275270 "BriefDescription": "Non-speculative icache miss, counted at completion."276271 },277272 {278278- "EventCode": "4D050",273273+ "EventCode": "0x4D050",279274 "EventName": "PM_VSU_NON_FLOP_CMPL",280275 "BriefDescription": "Non-floating point VSU instructions completed."281276 },282277 {283283- "EventCode": "4D052",278278+ "EventCode": "0x4D052",284279 "EventName": "PM_2FLOP_CMPL",285280 "BriefDescription": "Double Precision vector version of fmul, fsub, fcmp, fsel, fabs, fnabs, fres, fsqrte, fneg completed."286281 },287282 {288288- "EventCode": "400F2",283283+ "EventCode": "0x400F2",289284 "EventName": "PM_1PLUS_PPC_DISP",290285 "BriefDescription": "Cycles at least one Instr Dispatched."291286 },292287 {293293- "EventCode": "400F8",288288+ "EventCode": "0x400F8",294289 "EventName": "PM_FLUSH",295290 "BriefDescription": "Flush (any type)."296291 }
···11[22 {33- "EventCode": "1F15E",33+ "EventCode": "0x1F15E",44 "EventName": "PM_MRK_START_PROBE_NOP_CMPL",55 "BriefDescription": "Marked Start probe nop (AND R0,R0,R0) completed."66 },77 {88- "EventCode": "20016",88+ "EventCode": "0x20016",99 "EventName": "PM_ST_FIN",1010 "BriefDescription": "Store finish count. Includes speculative activity."1111 },1212 {1313- "EventCode": "20018",1313+ "EventCode": "0x20018",1414 "EventName": "PM_ST_FWD",1515 "BriefDescription": "Store forwards that finished."1616 },1717 {1818- "EventCode": "2011C",1818+ "EventCode": "0x2011C",1919 "EventName": "PM_MRK_NTF_CYC",2020 "BriefDescription": "Cycles during which the marked instruction is the oldest in the pipeline (NTF or NTC)."2121 },2222 {2323- "EventCode": "2E01C",2323+ "EventCode": "0x2E01C",2424 "EventName": "PM_EXEC_STALL_TLBIE",2525 "BriefDescription": "Cycles in which the oldest instruction in the pipeline was a TLBIE instruction executing in the Load Store Unit."2626 },2727 {2828- "EventCode": "201E6",2828+ "EventCode": "0x201E6",2929 "EventName": "PM_THRESH_EXC_32",3030 "BriefDescription": "Threshold counter exceeded a value of 32."3131 },3232 {3333- "EventCode": "200F0",3333+ "EventCode": "0x200F0",3434 "EventName": "PM_ST_CMPL",3535 "BriefDescription": "Stores completed from S2Q (2nd-level store queue). This event includes regular stores, stcx and cache inhibited stores. 
The following operations are excluded (pteupdate, snoop tlbie complete, store atomics, miso, load atomic payloads, tlbie, tlbsync, slbieg, isync, msgsnd, slbiag, cpabort, copy, tcheck, tend, stsync, dcbst, icbi, dcbf, hwsync, lwsync, ptesync, eieio, msgsync)."3636 },3737 {3838- "EventCode": "200FE",3838+ "EventCode": "0x200FE",3939 "EventName": "PM_DATA_FROM_L2MISS",4040 "BriefDescription": "The processor's data cache was reloaded from a source other than the local core's L1 or L2 due to a demand miss."4141 },4242 {4343- "EventCode": "30010",4343+ "EventCode": "0x30010",4444 "EventName": "PM_PMC2_OVERFLOW",4545 "BriefDescription": "The event selected for PMC2 caused the event counter to overflow."4646 },4747 {4848- "EventCode": "4D010",4848+ "EventCode": "0x4D010",4949 "EventName": "PM_PMC1_SAVED",5050 "BriefDescription": "The conditions for the speculative event selected for PMC1 are met and PMC1 is charged."5151 },5252 {5353- "EventCode": "4D05C",5353+ "EventCode": "0x4D05C",5454 "EventName": "PM_DPP_FLOP_CMPL",5555 "BriefDescription": "Double-Precision or Quad-Precision instructions completed."5656 }
···7575 return 0;7676}77777878-SEC("chk_egress") int tc_chk(struct __sk_buff *skb)7878+SEC("classifier/chk_egress")7979+int tc_chk(struct __sk_buff *skb)7980{8081 void *data_end = ctx_ptr(skb->data_end);8182 void *data = ctx_ptr(skb->data);···143142/* these are identical, but keep them separate for compatibility with the144143 * section names expected by test_tc_redirect.sh145144 */146146-SEC("dst_ingress") int tc_dst(struct __sk_buff *skb)145145+SEC("classifier/dst_ingress")146146+int tc_dst(struct __sk_buff *skb)147147{148148 return tc_redir(skb);149149}150150151151-SEC("src_ingress") int tc_src(struct __sk_buff *skb)151151+SEC("classifier/src_ingress")152152+int tc_src(struct __sk_buff *skb)152153{153154 return tc_redir(skb);154155}
···11-#!/bin/bash22-# SPDX-License-Identifier: GPL-2.033-#44-# This test sets up 3 netns (src <-> fwd <-> dst). There is no direct veth link55-# between src and dst. The netns fwd has veth links to each src and dst. The66-# client is in src and server in dst. The test installs a TC BPF program to each77-# host facing veth in fwd which calls into i) bpf_redirect_neigh() to perform the88-# neigh addr population and redirect or ii) bpf_redirect_peer() for namespace99-# switch from ingress side; it also installs a checker prog on the egress side1010-# to drop unexpected traffic.1111-1212-if [[ $EUID -ne 0 ]]; then1313- echo "This script must be run as root"1414- echo "FAIL"1515- exit 11616-fi1717-1818-# check that needed tools are present1919-command -v nc >/dev/null 2>&1 || \2020- { echo >&2 "nc is not available"; exit 1; }2121-command -v dd >/dev/null 2>&1 || \2222- { echo >&2 "dd is not available"; exit 1; }2323-command -v timeout >/dev/null 2>&1 || \2424- { echo >&2 "timeout is not available"; exit 1; }2525-command -v ping >/dev/null 2>&1 || \2626- { echo >&2 "ping is not available"; exit 1; }2727-if command -v ping6 >/dev/null 2>&1; then PING6=ping6; else PING6=ping; fi2828-command -v perl >/dev/null 2>&1 || \2929- { echo >&2 "perl is not available"; exit 1; }3030-command -v jq >/dev/null 2>&1 || \3131- { echo >&2 "jq is not available"; exit 1; }3232-command -v bpftool >/dev/null 2>&1 || \3333- { echo >&2 "bpftool is not available"; exit 1; }3434-3535-readonly GREEN='\033[0;92m'3636-readonly RED='\033[0;31m'3737-readonly NC='\033[0m' # No Color3838-3939-readonly PING_ARG="-c 3 -w 10 -q"4040-4141-readonly TIMEOUT=104242-4343-readonly NS_SRC="ns-src-$(mktemp -u XXXXXX)"4444-readonly NS_FWD="ns-fwd-$(mktemp -u XXXXXX)"4545-readonly NS_DST="ns-dst-$(mktemp -u XXXXXX)"4646-4747-readonly IP4_SRC="172.16.1.100"4848-readonly IP4_DST="172.16.2.100"4949-5050-readonly IP6_SRC="::1:dead:beef:cafe"5151-readonly IP6_DST="::2:dead:beef:cafe"5252-5353-readonly 
IP4_SLL="169.254.0.1"5454-readonly IP4_DLL="169.254.0.2"5555-readonly IP4_NET="169.254.0.0"5656-5757-netns_cleanup()5858-{5959- ip netns del ${NS_SRC}6060- ip netns del ${NS_FWD}6161- ip netns del ${NS_DST}6262-}6363-6464-netns_setup()6565-{6666- ip netns add "${NS_SRC}"6767- ip netns add "${NS_FWD}"6868- ip netns add "${NS_DST}"6969-7070- ip link add veth_src type veth peer name veth_src_fwd7171- ip link add veth_dst type veth peer name veth_dst_fwd7272-7373- ip link set veth_src netns ${NS_SRC}7474- ip link set veth_src_fwd netns ${NS_FWD}7575-7676- ip link set veth_dst netns ${NS_DST}7777- ip link set veth_dst_fwd netns ${NS_FWD}7878-7979- ip -netns ${NS_SRC} addr add ${IP4_SRC}/32 dev veth_src8080- ip -netns ${NS_DST} addr add ${IP4_DST}/32 dev veth_dst8181-8282- # The fwd netns automatically get a v6 LL address / routes, but also8383- # needs v4 one in order to start ARP probing. IP4_NET route is added8484- # to the endpoints so that the ARP processing will reply.8585-8686- ip -netns ${NS_FWD} addr add ${IP4_SLL}/32 dev veth_src_fwd8787- ip -netns ${NS_FWD} addr add ${IP4_DLL}/32 dev veth_dst_fwd8888-8989- ip -netns ${NS_SRC} addr add ${IP6_SRC}/128 dev veth_src nodad9090- ip -netns ${NS_DST} addr add ${IP6_DST}/128 dev veth_dst nodad9191-9292- ip -netns ${NS_SRC} link set dev veth_src up9393- ip -netns ${NS_FWD} link set dev veth_src_fwd up9494-9595- ip -netns ${NS_DST} link set dev veth_dst up9696- ip -netns ${NS_FWD} link set dev veth_dst_fwd up9797-9898- ip -netns ${NS_SRC} route add ${IP4_DST}/32 dev veth_src scope global9999- ip -netns ${NS_SRC} route add ${IP4_NET}/16 dev veth_src scope global100100- ip -netns ${NS_FWD} route add ${IP4_SRC}/32 dev veth_src_fwd scope global101101-102102- ip -netns ${NS_SRC} route add ${IP6_DST}/128 dev veth_src scope global103103- ip -netns ${NS_FWD} route add ${IP6_SRC}/128 dev veth_src_fwd scope global104104-105105- ip -netns ${NS_DST} route add ${IP4_SRC}/32 dev veth_dst scope global106106- ip -netns ${NS_DST} route 
-add ${IP4_NET}/16 dev veth_dst scope global
-	ip -netns ${NS_FWD} route add ${IP4_DST}/32 dev veth_dst_fwd scope global
-
-	ip -netns ${NS_DST} route add ${IP6_SRC}/128 dev veth_dst scope global
-	ip -netns ${NS_FWD} route add ${IP6_DST}/128 dev veth_dst_fwd scope global
-
-	fmac_src=$(ip netns exec ${NS_FWD} cat /sys/class/net/veth_src_fwd/address)
-	fmac_dst=$(ip netns exec ${NS_FWD} cat /sys/class/net/veth_dst_fwd/address)
-
-	ip -netns ${NS_SRC} neigh add ${IP4_DST} dev veth_src lladdr $fmac_src
-	ip -netns ${NS_DST} neigh add ${IP4_SRC} dev veth_dst lladdr $fmac_dst
-
-	ip -netns ${NS_SRC} neigh add ${IP6_DST} dev veth_src lladdr $fmac_src
-	ip -netns ${NS_DST} neigh add ${IP6_SRC} dev veth_dst lladdr $fmac_dst
-}
-
-netns_test_connectivity()
-{
-	set +e
-
-	ip netns exec ${NS_DST} bash -c "nc -4 -l -p 9004 &"
-	ip netns exec ${NS_DST} bash -c "nc -6 -l -p 9006 &"
-
-	TEST="TCPv4 connectivity test"
-	ip netns exec ${NS_SRC} bash -c "timeout ${TIMEOUT} dd if=/dev/zero bs=1000 count=100 > /dev/tcp/${IP4_DST}/9004"
-	if [ $? -ne 0 ]; then
-		echo -e "${TEST}: ${RED}FAIL${NC}"
-		exit 1
-	fi
-	echo -e "${TEST}: ${GREEN}PASS${NC}"
-
-	TEST="TCPv6 connectivity test"
-	ip netns exec ${NS_SRC} bash -c "timeout ${TIMEOUT} dd if=/dev/zero bs=1000 count=100 > /dev/tcp/${IP6_DST}/9006"
-	if [ $? -ne 0 ]; then
-		echo -e "${TEST}: ${RED}FAIL${NC}"
-		exit 1
-	fi
-	echo -e "${TEST}: ${GREEN}PASS${NC}"
-
-	TEST="ICMPv4 connectivity test"
-	ip netns exec ${NS_SRC} ping $PING_ARG ${IP4_DST}
-	if [ $? -ne 0 ]; then
-		echo -e "${TEST}: ${RED}FAIL${NC}"
-		exit 1
-	fi
-	echo -e "${TEST}: ${GREEN}PASS${NC}"
-
-	TEST="ICMPv6 connectivity test"
-	ip netns exec ${NS_SRC} $PING6 $PING_ARG ${IP6_DST}
-	if [ $? -ne 0 ]; then
-		echo -e "${TEST}: ${RED}FAIL${NC}"
-		exit 1
-	fi
-	echo -e "${TEST}: ${GREEN}PASS${NC}"
-
-	set -e
-}
-
-hex_mem_str()
-{
-	perl -e 'print join(" ", unpack("(H2)8", pack("L", @ARGV)))' $1
-}
-
-netns_setup_bpf()
-{
-	local obj=$1
-	local use_forwarding=${2:-0}
-
-	ip netns exec ${NS_FWD} tc qdisc add dev veth_src_fwd clsact
-	ip netns exec ${NS_FWD} tc filter add dev veth_src_fwd ingress bpf da obj $obj sec src_ingress
-	ip netns exec ${NS_FWD} tc filter add dev veth_src_fwd egress bpf da obj $obj sec chk_egress
-
-	ip netns exec ${NS_FWD} tc qdisc add dev veth_dst_fwd clsact
-	ip netns exec ${NS_FWD} tc filter add dev veth_dst_fwd ingress bpf da obj $obj sec dst_ingress
-	ip netns exec ${NS_FWD} tc filter add dev veth_dst_fwd egress bpf da obj $obj sec chk_egress
-
-	if [ "$use_forwarding" -eq "1" ]; then
-		# bpf_fib_lookup() checks if forwarding is enabled
-		ip netns exec ${NS_FWD} sysctl -w net.ipv4.ip_forward=1
-		ip netns exec ${NS_FWD} sysctl -w net.ipv6.conf.veth_dst_fwd.forwarding=1
-		ip netns exec ${NS_FWD} sysctl -w net.ipv6.conf.veth_src_fwd.forwarding=1
-		return 0
-	fi
-
-	veth_src=$(ip netns exec ${NS_FWD} cat /sys/class/net/veth_src_fwd/ifindex)
-	veth_dst=$(ip netns exec ${NS_FWD} cat /sys/class/net/veth_dst_fwd/ifindex)
-
-	progs=$(ip netns exec ${NS_FWD} bpftool net --json | jq -r '.[] | .tc | map(.id) | .[]')
-	for prog in $progs; do
-		map=$(bpftool prog show id $prog --json | jq -r '.map_ids | .? | .[]')
-		if [ ! -z "$map" ]; then
-			bpftool map update id $map key hex $(hex_mem_str 0) value hex $(hex_mem_str $veth_src)
-			bpftool map update id $map key hex $(hex_mem_str 1) value hex $(hex_mem_str $veth_dst)
-		fi
-	done
-}
-
-trap netns_cleanup EXIT
-set -e
-
-netns_setup
-netns_setup_bpf test_tc_neigh.o
-netns_test_connectivity
-netns_cleanup
-netns_setup
-netns_setup_bpf test_tc_neigh_fib.o 1
-netns_test_connectivity
-netns_cleanup
-netns_setup
-netns_setup_bpf test_tc_peer.o
-netns_test_connectivity
···
 	TEST_ASSERT(false, "%s: [%d] child escaped the ninja\n", __func__, run);
 }
 
+void wait_for_child_setup(pid_t pid)
+{
+	/*
+	 * Wait for the child to post to the semaphore, but wake up periodically
+	 * to check if the child exited prematurely.
+	 */
+	for (;;) {
+		const struct timespec wait_period = { .tv_sec = 1 };
+		int status;
+
+		if (!sem_timedwait(sem, &wait_period))
+			return;
+
+		/* Child is still running, keep waiting. */
+		if (pid != waitpid(pid, &status, WNOHANG))
+			continue;
+
+		/*
+		 * Child is no longer running, which is not expected.
+		 *
+		 * If it exited with a non-zero status, we explicitly forward
+		 * the child's status in case it exited with KSFT_SKIP.
+		 */
+		if (WIFEXITED(status))
+			exit(WEXITSTATUS(status));
+		else
+			TEST_ASSERT(false, "Child exited unexpectedly");
+	}
+}
+
 int main(int argc, char **argv)
 {
 	uint32_t i;
···
 		run_test(i); /* This function always exits */
 
 		pr_debug("%s: [%d] waiting semaphore\n", __func__, i);
-		sem_wait(sem);
+		wait_for_child_setup(pid);
 		r = (rand() % DELAY_US_MAX) + 1;
 		pr_debug("%s: [%d] waiting %dus\n", __func__, i, r);
 		usleep(r);
···
 	u32 function;
 	u32 index;
 } mangled_cpuids[] = {
+	/*
+	 * These entries depend on the vCPU's XCR0 register and IA32_XSS MSR,
+	 * which are not controlled for by this test.
+	 */
 	{.function = 0xd, .index = 0},
+	{.function = 0xd, .index = 1},
 };
 
 static void test_guest_cpuids(struct kvm_cpuid2 *guest_cpuid)
···
         "setup": [
             "$IP link add dev $DUMMY type dummy || /bin/true"
         ],
-        "cmdUnderTest": "$TC qdisc add dev $DUMMY root fq_pie flows 65536",
-        "expExitCode": "2",
+        "cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root fq_pie flows 65536",
+        "expExitCode": "0",
         "verifyCmd": "$TC qdisc show dev $DUMMY",
-        "matchPattern": "qdisc",
-        "matchCount": "0",
+        "matchPattern": "qdisc fq_pie 1: root refcnt 2 limit 10240p flows 65536",
+        "matchCount": "1",
         "teardown": [
             "$IP link del dev $DUMMY"
         ]
+4 -2
virt/kvm/kvm_main.c
···
 {
 	return kvm_make_all_cpus_request_except(kvm, req, NULL);
 }
+EXPORT_SYMBOL_GPL(kvm_make_all_cpus_request);
 
 #ifndef CONFIG_HAVE_KVM_ARCH_TLB_FLUSH_ALL
 void kvm_flush_remote_tlbs(struct kvm *kvm)
···
 		goto out;
 	if (signal_pending(current))
 		goto out;
+	if (kvm_check_request(KVM_REQ_UNBLOCK, vcpu))
+		goto out;
 
 	ret = 0;
 out:
···
 				goto out;
 			}
 			poll_end = cur = ktime_get();
-		} while (single_task_running() && !need_resched() &&
-			 ktime_before(cur, stop));
+		} while (kvm_vcpu_can_poll(cur, stop));
 	}
 
 	prepare_to_rcuwait(&vcpu->wait);
+6 -10
virt/lib/irqbypass.c
···
 	if (prod->add_consumer)
 		ret = prod->add_consumer(prod, cons);
 
-	if (ret)
-		goto err_add_consumer;
-
-	ret = cons->add_producer(cons, prod);
-	if (ret)
-		goto err_add_producer;
+	if (!ret) {
+		ret = cons->add_producer(cons, prod);
+		if (ret && prod->del_consumer)
+			prod->del_consumer(prod, cons);
+	}
 
 	if (cons->start)
 		cons->start(cons);
 	if (prod->start)
 		prod->start(prod);
-err_add_producer:
-	if (prod->del_consumer)
-		prod->del_consumer(prod, cons);
-err_add_consumer:
+
 	return ret;
 }
 